“Citizen surveillance and autonomous weapons deserved more deliberation”: OpenAI robotics director resigns

A week ago we were noting that, in a classic case of “the king is dead, long live the king,” Anthropic’s slide into ostracism after being deemed a “supply chain risk” by the United States practically overlapped with the announcement, in record time, of the US Department of Defense agreement with OpenAI. Behind the scenes: the reasons for the “no” from the company led by Dario Amodei, and the unknowns in the terms of an agreement that puts ChatGPT on Pentagon computers. A few days later, Caitlin Kalinowski said goodbye to her position at OpenAI, citing the military use of artificial intelligence as the reason.

The resignation. Caitlin Kalinowski, head of the OpenAI robotics team since November 2024, announced her departure from the company a few hours ago in posts on X and LinkedIn. She makes clear that her decision is about principles, not people, and expresses respect for Sam Altman and the team. In her brief statement there are two issues that, in her opinion, the company did not think through enough internally:

- The surveillance of American citizens without judicial oversight.
- Autonomous weapons capable of firing without human supervision.

Context. The resignation comes in the midst of Anthropic’s exit from the Pentagon (the transition will take six months), OpenAI’s entry, and a debate about how far AI companies should go in their collaboration with the US military establishment:

- Anthropic stood up to the Pentagon, drawing strict lines on domestic surveillance and autonomous weapons.
- OpenAI reached an agreement with the Department of Defense to deploy its models on a classified government network, in a move that has been read as opportunistic. According to the company led by Altman, the agreement excludes domestic surveillance and autonomous weapons, but the reputational damage was already done: thousands of people uninstalled ChatGPT in protest.

Why it matters. Caitlin Kalinowski’s goodbye is the first public, named resignation from a senior position at OpenAI explicitly motivated by ethical disagreement over the military use of AI. That sets a precedent for the industry insofar as it exposes an internal fracture in the most influential company in the sector, placing OpenAI in a delicate position before those who use its tools, its own staff and society at large. And it makes clearer than ever the need to legislate on artificial intelligence and its civil and military uses. Europe may be behind in the AI battle, but it set about the arduous task of establishing a regulatory framework long ago.

What Kalinowski does not say. In the comments on her post she does not say it outright, but when an agreement of this magnitude has already been signed and the CEO has made it public, there is little room to maneuver from within: resigning with a public statement like hers is one of the few pressure moves left.

Consequences. For OpenAI, the pressure is growing, and it faces more departures and more cancellations if it does not show, in a credible and verifiable way, where its red lines are: the militarization of AI is something we are experiencing in real time. For the AI industry, it is more fuel on the fire of the self-regulation debate. And Anthropic gains reputation, although in the short term it has lost an important contract, and its new status may put its very existence in check.
In Xataka | The US has decided to shoot itself in the foot and destroy one of the best AI companies in the country
In Xataka | Sam Altman says he is terrified of a world where AI companies believe themselves more powerful than the government. That is exactly what he is building
Cover image | Caitlin Kalinowski

China is clear about who should lead the advances of its best AI and robotics companies: Generation Z

Those who are now entering the labor market find themselves up against a rival that is hard to beat: it has no collective agreement and no need for rest or fulfillment. On top of that, it does the work of junior profiles quite well: artificial intelligence is limiting Generation Z’s landing in the office. We have seen it in the United States, in the UK, and also in the Big Four firms that make up the Madrid skyline. Replacing those just starting out with AI has emerged as the West’s formula for boosting productivity… from the bosses’ point of view. If you are the one who has to wrestle with it and validate its output, not so much. But it is by no means the only way, nor is it happening everywhere. In fact, China is betting on exactly the opposite: it is turning Generation Z and millennials into heads of areas as strategic as robotics and artificial intelligence itself. And they are not just any young people: they are true galácticos, its best assets.

Give me someone young. As TechAsia reports, a trend is emerging in China: hiring millennials and Generation Z for high-level technical positions at large AI and robotics companies. The best example is Yao Shunyu: at 28 he has already been through OpenAI, and a couple of months ago he returned to his native China to become chief scientist at Tencent. He now reports directly to the CEO. Shunyu’s case is just the tip of the iceberg of this new organizational strategy among Chinese companies. There are other cases, such as Luo Jianlan, formerly of Google and for a year now chief scientist at AgiBot. Or Dong Hao, chief scientist at PrimeBot after earning his PhD at Imperial College. By the way, OpenAI and Meta have copied the recipe: the former with the Polish Jakub Pachocki and the latter with the Chinese Zhao Shengjia. They are scientists, but they might as well be professional footballers: none of them is over 35.

Why it matters. When we think of a boss within a modern corporate structure of a certain size, team management, meetings and bureaucracy inevitably come to mind. This strategy of China’s big tech, however, is deliberately different from what we have in the West and rests on three reasons that SCMP lays out:

- Institutional separation of research vs. product. A chief scientist looks to the future; he does not manage teams or budgets.
- Competitive advantage in a saturated market, allowing companies to build their own technology without depending on third parties. If you have the best at home, you do not have to ask permission or sign talent abroad.
- Youth as the top asset. AI is evolving by leaps and bounds, and with this move China makes sure it keeps those who have been at ground zero of the great milestones of recent years, at elite universities or in the laboratories of renowned institutions such as OpenAI, Google or Princeton.

China is a world source of engineers. That China is a country of engineers is no secret: it is a plan that has been under way for 40 years. In fact, it has now opted to go a step further and accelerate doctorates. The Chinese labor market is already showing signs of some saturation, which has also brought diversification, with new routes that avoid even setting foot in university thanks to its new bet on vocational training. In any case, having an army of almost six million engineering professionals gives you an advantage in AI. And it has more than enough: it has engineers to export.
Without going any further, the vast majority of last year’s signings for Meta’s superintelligence team are Chinese. But young engineers who stay at home have an opportunity beyond joining a leading company in the sector: leading it.

Disclaimer: a chief scientist is not a CTO. It is worth remembering a distinction between two frequently confused positions: a chief scientist is not the chief technology officer. While the former investigates, explores and plans for the medium and long term without touching products or marketing, the latter manages teams, designs architecture and meets business objectives. Confusing the two profiles, or mixing them, as SCMP recalls Alibaba and Baidu did, ends up subordinating science to the urgency of the market. In any case, it is a fragile position in a company that is not clear about why it needs it.

In Xataka | China looks at VET: why more and more Generation Z students prefer trades over university degrees
In Xataka | If Spain wants to imitate China and be a “country of engineers”, this map reveals the extent of its problem
Cover image | Hyundai Motor Group and cottonbro studio

We are entering a new era of robotics driven by AI and Disney is its perfect showcase

For decades, Disney has been a pioneer in bringing its characters to life through animatronics, a now-classic part of its theme parks that gives them that ‘magic’ that dazzles kids and grown-up kids alike. For some time now, however, it has been working on going further with the latest advances in robotics and AI so that the experience ends up feeling even more authentic. That is why it recently announced that Olaf, the little snowman from the Frozen franchise, will arrive in its parks as its first fully autonomous robotic character. As the company announced, Olaf will debut in the Hong Kong and Paris parks during 2026. The interesting thing is that we are not talking about a simple automaton: its engineers have applied reinforcement learning and the latest advances in robotics to faithfully replicate the character’s movements.

Image | Olaf’s internal parts

A controlled setting. The robots that live alongside us beyond experimentation have traditionally been tied to functional, specific goals, from industrial robots to quadrupeds that traverse complex terrain. Disney knows there is a niche where it can exploit this technology’s capabilities to ‘bring life’ to its characters and, of course, keep selling tickets to its parks. In that sense, theme parks become perfect settings for experimenting with and developing advanced robotics: they are controlled environments where robots can interact with thousands of people every day, learn from those interactions and refine their behavior, always under supervision.

The technical challenge Olaf poses. According to the paper published by Disney Research Hub (and the interesting video published on its channel), creating Olaf posed several problems. The character has a huge head supported by a tiny neck, small feet with no visible legs, and a walking style that does not respect real physical laws. To solve this, the engineers designed a system of asymmetrical legs (one inverted with respect to the other) hidden under a polyurethane foam “skirt” that simulates its snow body. This skirt not only conceals the internal mechanics, but also absorbs impacts and allows recovery steps without breaking the visual illusion.

Image | Reinforcement learning scheme that applies policies to modify its behavior

As the engineers responsible for its development explain, each facial joint, from the eyes to the jaw, is controlled by spherical and planar mechanical linkages that allow full expressiveness while keeping the tiny actuators hidden beneath the costume.

The key: reinforcement learning. Instead of manually programming each movement, the team trained Olaf using reinforcement learning guided by reference animations created by artists. As Kyle Laughlin, senior vice president of Walt Disney Imagineering, told Variety, “a process that used to take years can now be done in days and weeks.” Laughlin says the system generates millions of simulations in which the robot learns to walk, keep its balance and emulate gestures, much as a child learning to move would. But it is not just about walking: the AI must also capture that spark of personality that makes the character recognizable. For this, those responsible explain, specific rewards were used that favored precise imitation of the original animation cycle.

Noise and temperature. Two technical obstacles threatened to ruin the robot’s credibility.
On the one hand, the sound: the robotic steps were too mechanical and noisy. As they recount, those responsible introduced an additional reward during training that penalized sudden changes in the foot’s vertical speed at the moment of ground contact. In this way they managed to reduce the average noise of each footfall from almost 82 dB to just 64 dB, without significantly compromising its gait. The second problem was overheating: its thin neck houses small actuators that must support the weight of its large head, which is also covered by an insulating suit. The solution involved feeding real-time temperature data to the AI system via a thermal model integrated into the simulation. Thus, when the actuators approach the 80°C limit, the system subtly adjusts the posture to reduce motor torque before any damage is done.

A collaborative ecosystem accelerated by Newton. Behind the technological leap is Newton, a physics engine jointly developed by NVIDIA, Google DeepMind and Disney Research and announced at GTC 2025 last March. “This is how we are going to train robots in the future,” said Jensen Huang himself, NVIDIA’s CEO, at the latest GTC conference while showing off the technology. Newton can accurately simulate how robots interact with deformable objects such as fabric or food, something crucial for costumed characters like Olaf, and it is designed to integrate with MuJoCo, the physics engine Google DeepMind already uses to simulate complex joint movements.

From BDX to Olaf. The Star Wars-inspired bipedal BDX droids, which debuted at Galaxy’s Edge in fall 2023 and have since appeared at events like SXSW and even filmed scenes for the upcoming “Mandalorian and Grogu”, were Disney’s first step toward this technology. According to Laughlin, the company has “a solid roadmap” to deploy more autonomous characters with greater expressiveness and interactivity in its theme parks and cruise ships. This fits into Disney’s announced plan to invest 60 billion dollars in new attractions over the next decade.

Valuable data. The arrival of this kind of technology in its parks also gives Disney reusable infrastructure. The techniques used in Olaf, such as the compact asymmetric design, its thermal management or its noise-aware control, can also be applied to future characters with equally strange morphologies. It is also worth bearing in mind that the robots would operate daily under the public’s gaze at all times, which becomes an advantage, since each interaction generates valuable data on how to improve their behavior. In the face of what looks like an imminent wave of new AI-powered humanoid robots, Disney may end up being a very profitable customer in this new era…
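Those two fixes boil down to reward shaping: on top of the imitation reward that keeps Olaf close to the artists’ reference animation, training adds a penalty for abrupt vertical foot speed at contact (the noisy-step proxy) and a penalty as actuators approach their thermal limit. The sketch below illustrates the idea in a few lines of Python; it is not Disney’s code, and the field names, weights and the simplified linear thermal term are assumptions made up for illustration.

```python
import numpy as np

def gait_reward(sim_state, ref_pose, prev_foot_vz, temp_limit_c=80.0):
    """Toy reward combining imitation, footfall-noise and thermal terms.

    sim_state: dict with hypothetical fields 'pose' (joint angles),
    'foot_vz' (vertical foot speed at contact) and 'actuator_temp_c'.
    ref_pose: target joint pose taken from the artist-made reference animation.
    """
    # 1) Imitation: reward staying close to the reference animation frame.
    pose_error = np.linalg.norm(sim_state["pose"] - ref_pose)
    r_imitation = np.exp(-2.0 * pose_error)

    # 2) Quiet steps: penalize abrupt changes in vertical foot speed at
    #    ground contact, the proxy described for loud footfalls.
    r_quiet = -0.1 * abs(sim_state["foot_vz"] - prev_foot_vz)

    # 3) Thermals: start discouraging postures once actuators get within
    #    10 degC of the limit (a simplified stand-in for the thermal model).
    overheat = max(0.0, sim_state["actuator_temp_c"] - (temp_limit_c - 10.0))
    r_thermal = -0.05 * overheat

    return r_imitation + r_quiet + r_thermal

# Example: one evaluation of the shaped reward for a simulated step.
state = {"pose": np.zeros(12), "foot_vz": 0.3, "actuator_temp_c": 74.0}
print(gait_reward(state, ref_pose=np.full(12, 0.05), prev_foot_vz=0.1))
```

In a real training setup this scalar would be summed over each simulated episode and fed to the reinforcement learning algorithm; the relative weights decide how much gait fidelity is traded for quieter, cooler operation.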

This is what robotics needs to look like it does in the movies

Imagine an AI that not only answers questions but can imagine scenarios, predict consequences or plan actions before executing them. That is precisely what world models promise: a technology that is drawing attention from the main artificial intelligence laboratories and that could radically change how machines understand and interact with their environment.

What exactly are they. World models are AI systems that build an internal representation of the environment, as if they contained a simulation of the real world. Unlike traditional supervised learning, which simply maps inputs to outputs using labeled data, these models learn how an environment works and can predict what will happen next. It is similar to how humans use mental simulations to anticipate outcomes without needing to physically live through each situation.

The batter example. Researchers David Ha and Jürgen Schmidhuber explain it with a sports analogy: a baseball batter has just milliseconds to decide how to hit the ball, less time than it takes for the visual signal to reach the brain. What allows him to hit a 100-mile-per-hour fastball is his ability to instinctively predict where the ball will go. His muscles react reflexively based on the predictions of his internal mental model, without the need to consciously plan every possible scenario.

Why they matter now. Prominent figures such as Yann LeCun (Meta), Demis Hassabis (Google DeepMind) and Yoshua Bengio (Quebec AI Institute) consider these models essential to building truly intelligent systems. World Labs, the startup of Fei-Fei Li, one of the most influential figures in AI, raised 230 million dollars last year to develop them. Meanwhile, General Intuition, a new AI lab owned by Medal (known for its app for recording and sharing game clips), has just closed a 133.7 million dollar financing round. The investment came primarily from Khosla Ventures founder Vinod Khosla (one of OpenAI’s early investors), who claims that “multiple companies valued at hundreds of billions, potentially even trillions of dollars will be built” in this field.

How they work. These systems have three fundamental capabilities. First, they compress complex sensory data (images, video, text) into simpler representations. Second, they predict future states of the environment based on past and present information. Third, they use that learned model to simulate different actions and choose the best option. It is as if the AI could “dream” different scenarios before acting.

The video game case. Ha and Schmidhuber also have a clarifying example for this: imagine an AI learning to play a racing game. Instead of memorizing sequences of moves, it first builds an internal model of how the game world behaves: how the car moves, how the road curves, where obstacles appear. It can then imagine future scenarios, testing different driving strategies in its simulated world before applying them in the real game.

Promising applications. World models are already transforming several fields. In autonomous driving, they allow vehicles to simulate traffic dynamics and pedestrian behavior to make safer decisions. In robotics, robots can imagine different ways of completing a task before executing it, especially useful when real-world training is expensive or dangerous. And in video generation, they help create more realistic content: a model that understands why a ball bounces will render it better than one that has simply memorized patterns.

Beyond video.
A better video generation model would be just the beginning. LeCun describes how a world model could help achieve goals through reasoning: given a video of a messy room and the goal of cleaning it, it could devise a sequence of actions (vacuuming, washing the dishes, emptying the trash) not because it has observed that pattern, but because it understands at a deeper level how to go from dirty to clean. “We need machines that understand the world, that can remember things, that have intuition and common sense,” he says.

The obstacles ahead. Training and running world models requires massive computing power, even compared with current generative models. Thousands upon thousands of GPUs, cloistered in gigantic, energy-hungry data centers, are already needed just to run today’s models; training world models is on another level. Furthermore, like all AI models, they also run the risk of hallucinating and internalizing biases from their training data.

The industry’s bet. Despite the technical challenges, different strategies are in play. Google DeepMind and OpenAI are betting that, with enough multimodal training data (video, 3D simulations and more, beyond text), a world model will emerge spontaneously inside a neural network. LeCun, for his part, believes a completely new, non-generative AI architecture will be necessary.

What comes next. Several experts also predict that world models will make it possible to create interactive 3D worlds on demand for video games, virtual photography and other applications. According to Justin Johnson, co-founder of World Labs, “we already have the ability to create virtual, interactive worlds, but it costs hundreds of millions of dollars and a lot of development time.” They could also revolutionize robotics by giving robots real awareness of their environment and their own body. As Mashrabov sums up, “with an advanced world model, an AI could develop a personal understanding of any scenario it finds itself in and begin to reason out possible solutions.” Although LeCun estimates we are still at least a decade away from the world models he imagines, the industry’s huge expectations for the next evolution of AI and the monstrous investment the field is attracting suggest this technology could be the next great leap toward machines that not only react to the world, but understand and model it.

Cover image | Michael Marais
In Xataka | “The safety of our children is not for sale”: the first law that regulates ‘AI friends’ is here
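The three capabilities described above (compressing observations into a compact representation, predicting the next state, and “dreaming” candidate action sequences to pick the best one) fit into a short planning loop. The sketch below is only an illustration of that loop: the encoder, dynamics and reward functions are placeholders standing in for learned networks, and every name and dimension is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(observation):
    """Placeholder encoder: compress a raw observation into a small latent."""
    return observation[:8]               # a learned network would go here

def predict_next(latent, action):
    """Placeholder latent dynamics: predict the next latent state."""
    return 0.9 * latent + 0.1 * action   # a learned network would go here

def reward_of(latent):
    """Placeholder reward model, e.g. negative distance to a goal state."""
    return -float(np.linalg.norm(latent))

def plan(observation, horizon=5, candidates=64):
    """'Dream' several action sequences inside the model, keep the best one."""
    latent0 = encode(observation)
    best_score, best_first_action = -np.inf, None
    for _ in range(candidates):
        actions = rng.normal(size=(horizon, 8))
        latent, score = latent0, 0.0
        for action in actions:
            latent = predict_next(latent, action)
            score += reward_of(latent)
        if score > best_score:
            best_score, best_first_action = score, actions[0]
    return best_first_action

# Only the first action of the best imagined sequence is executed;
# the loop then replans from the next real observation.
print(plan(rng.normal(size=16)))
```

This is the same pattern whether the "environment" is a racing game, a traffic scene or a messy room: the model pays its training cost once and is then queried many times to imagine outcomes before acting.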

Alibaba has one of the best open-source AI models. Its next step: using it in robotics

Alibaba has taken another step in its commitment to artificial intelligence by creating an internal team dedicated to robotics, which will operate under Qwen, its AI model division. The Chinese giant, owner of one of the best open-source AI models, now wants the Qwen team to apply its know-how to robotics, a sector that is starting to attract interest not only in industry but also, with the arrival of new projects, in the home.

Who leads the project. Justin Lin, technology manager at Qwen and an expert in multimodal models (capable of processing text, sound and images), was the one who confirmed the creation of this “small team for robotics and embodied AI” on his social networks. Lin has worked on the development of the Qwen models, which are currently among the most popular open-source models globally.

The vision behind the move. According to Lin, multimodal AI models are evolving into “fundamental agents” capable of performing complex, long-horizon reasoning tasks thanks to reinforcement learning. “They should definitely make the leap from the virtual world to the physical world,” the executive explained, making clear the intention to apply these technologies in tangible devices.

Alibaba’s big bet. This announcement is part of Alibaba’s broader strategy in the sector. Last month, the company led a 140 million dollar financing round in X Square Robot, a Chinese robotics startup. In addition, its CEO Eddie Wu estimates that global investment in AI will reach 4 trillion dollars over the next five years, a figure that reflects the sector’s expectations.

Global competition. Alibaba is not alone in this race. Nvidia and SoftBank are also making significant moves in smart robotics. SoftBank has just announced the acquisition of ABB’s industrial robotics business for 5.4 billion dollars, while Nvidia CEO Jensen Huang has described the combination of AI and robotics as a “multi-billion dollar” long-term growth opportunity. China is also the world’s leading power in the robotics sector: in 2024 alone, Chinese factories installed nearly 300,000 industrial robots, more than the rest of the world combined.

The Qwen factor. The choice to place this team within Qwen makes all the sense in the world. Seven models of the Qwen series currently sit in the Hugging Face top 10, with the multimodal model Qwen3-Omni in first place. This strength in AI gives the company a solid foundation for developing advanced robotic applications, building on the track record it already has with Qwen.

Cover image | zhang hui and Possessed Photography
In Xataka | AI companies have just encountered an unexpected challenge: insurers have started to turn their backs on them

The iRobot co-founder believes there is a robotics bubble

Rodney Brooks believes humanoid robots are a bubble doomed to burst. And this is not just anyone saying it: Brooks was the co-founder of iRobot, the company behind the famous Roomba family of robot vacuums.

Too good to be true. This expert, who worked for decades at MIT before iRobot, does not believe we will end up living surrounded by humanoid robots. He views with skepticism the work of companies such as Tesla or Figure, which are building robots that learn to move like humans. In a new essay he argues that this way of thinking about the future “is pure fantasy.”

The dexterity bottleneck. In his opinion, the problem is that trying to imitate the dexterity of a human hand, for example, is a near-impossible mission, especially because the hand has some 17,000 specialized tactile receptors (detecting pressure, vibration, texture or slippage) that humanoid robots simply lack. There are, however, concrete advances in this area.

Insufficient training. According to Brooks, “we don’t have that kind of tradition for touch data.” This area is different from what has been achieved in fields like speech recognition or image processing. In his essay he explains how learning from visual videos of humans performing tasks is not enough for robots to acquire that dexterity.

An experiment. To reinforce his argument, Brooks recalls an experiment in which a person’s fingertips were anesthetized in order to analyze the dexterity of their hands: the person took four times longer to complete a task as simple as lighting a match. The sense of touch, says this expert, is irreplaceable.

Timber. He also warns of the safety risks these robots pose. Keeping them upright requires a lot of energy, he says, and if they fall they can become a real hazard: as he explains, the kinetic energy of their limbs is amplified by scaling laws.

Robots with grippers. For him, the “humanoid robots” of the future will be anything but humanoid. Instead, in 15 years what we will see are robots with wheels, several arms, industrial grippers and specialized sensors. The huge investments technology companies are currently making will not crystallize into that theoretical mass production of humanoid robots.

China does believe in humanoid robots. Brooks’ arguments are powerful, but the truth is that China is showing absolute faith in the future of this segment. Today’s humanoid robots are limited in performance and capability, but the investment in this market and the advances being made are undeniable. What remains to be seen is whether human dexterity and tactile perception do indeed turn out to be insurmountable obstacles for such robots.

In Xataka | China has just opened the first megastore for humanoid robots. What comes next promises even more

South Korea’s latest robot is not a humanoid and doesn’t work in factories. It does something out of the ordinary: parkour

The physical feat. As detailed in a video available on YouTube, the first step falls to the planner, which generates possible routes from a map of the environment. That map is continuously updated with sensor and simulation data. A neural network then rules out the risky options and keeps the most efficient one. The tracker, for its part, guides the robot’s precise movements. It was trained through reinforcement learning, a trial-and-error technique, which prepared it to adapt to dynamic and challenging scenarios. To save computation time, Raibo reuses its own footprints: the hind legs step where the front legs stepped before.

Image | Raibo training simulation

As they recount, the robot was able to run over irregular surfaces, clear stones, cross inclined ramps, climb stairs and even jump gaps of more than a meter. It reached a speed of 2.7 meters per second. And most surprising of all: if the goal moved, the robot detected it and recalculated its route on its own, without stopping and without losing control.

Meanwhile, robotics does not stand still in the rest of the world. Raibo’s advance is not an isolated case. It is part of a global wave of developments in which robotics and AI are increasingly intertwined. Without AI, robots would still be little more than a bundle of sensors and motors. With AI, they can interpret their environment, make decisions and carry out complex tasks autonomously. Companies like Google are betting on it. With Gemini Robotics, its latest big project, it has designed a system capable of controlling different types of robots in real time, understanding human language, pointing at 3D objects and adapting to new situations without prior training. The search giant says its performance on unforeseen tasks doubles that of previous models. For now, this technology is in the testing phase, but Google is already collaborating with companies such as Apptronik and Boston Dynamics to integrate it into advanced humanoids.

China is also accelerating. And it is not the only region investing heavily in this direction. In China, humanoid robots not only train: they compete. A few weeks ago the country held a kickboxing tournament between four G1 robots from Unitree Robotics. It was broadcast live and showed how these machines could dodge blows, get up on their own after falling and keep fighting with surprising agility. They are 35-kilo robots with up to 23 degrees of freedom, fitted with state-of-the-art sensors, and according to the organizers, new multi-sport competitions are already in preparation.

Image | Robots developed in China in a boxing ring

And there are already robots working in real factories. In the United States, meanwhile, some humanoid robots have left the laboratory and are entering real factories. One of them is Figure 01, which has been working for some time at a BMW plant in South Carolina. This robot, developed by the company Figure, can open doors, climb stairs and manipulate objects autonomously. Granted, it still moves slowly and needs to stay permanently tethered by cable.

Parkour as a preview of the future. All of this helps explain why Raibo’s case is so fascinating. It is not a humanoid, nor was it created for industry or the home. But it shows that, by combining real-time decision algorithms with light hardware and advanced training, it is possible to create machines that not only execute orders, but also improvise, with an agility that induces vertigo.
Now we have to wait and see how these advances find their place in genuinely useful applications. That is where the real leap will be.

Images | Robotics & Artificial Intelligence Lab
In Xataka | Nvidia is desperately seeking engineers for its Taiwan R&D center. It is even accused of “stealing” them from TSMC
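The two-stage pipeline described for Raibo (a planner that proposes routes on a continuously updated map and filters out the risky ones, plus an RL-trained tracker that turns the chosen route into movements) can be summarized as a simple control loop. The sketch below is only an illustration of that loop under assumptions: the stub functions, scores and thresholds are invented for the example and are not KAIST’s implementation.

```python
import random

def propose_routes(n=16):
    """Stub planner: each candidate route is a (risk, energy_cost) pair that
    would normally be derived from the continuously updated terrain map."""
    return [(random.random(), random.uniform(1.0, 5.0)) for _ in range(n)]

def tracker_policy(route):
    """Stub for the RL-trained tracker: turns a route into a motion command.
    A real tracker would also reuse the front legs' footholds for the hind legs."""
    risk, cost = route
    return {"gait": "gallop", "stride_scale": round(1.0 / cost, 2)}

def control_step():
    # Planner: propose candidate routes, let a learned filter discard the
    # risky ones, then keep the most efficient of what remains.
    routes = propose_routes()
    safe = [r for r in routes if r[0] < 0.5] or routes
    route = min(safe, key=lambda r: r[1])
    # Tracker: follow the chosen route. If the goal moves, the next cycle
    # simply replans from the freshly updated map.
    return tracker_policy(route)

print(control_step())
```

Running this loop at a high rate is what lets the robot keep chasing a moving goal: planning and tracking are cheap enough to repeat every cycle instead of being computed once up front.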

Odense built ships for almost a century. Today it is positioned as the capital of robotics

Odense’s industrial identity was shaped by its shipyards. For almost a century, its engineers built some of the most advanced container ships in the world, including the giants of Mærsk’s E class, which at the time were the largest cargo ships ever built. But the Danish naval industry had been losing ground for years. From the end of the 1970s the sector went into gradual retreat as shipbuilding moved to South Korea, Japan and China, where production costs were significantly lower. To contain the crisis, the Danish government rolled out state subsidies, export credits and strategic orders, but the trend was unstoppable: between 1977 and 1985, the market share of European shipyards fell from 41% to 18%, while Asia’s rose from 46% to 70%, with China emerging as a key player. These figures appear in ‘Transforming an industry in decline’, Thomas Roslyng Olesen’s analysis of the fall of the Danish shipyards.

Odense was not immune to this shift. Until the late 2000s, Mærsk had built many of its ships at the Odense Steel Shipyard, but growing competition from Asian yards led the company to rethink its strategy. As the Taipei Times reported, in 2011 Mærsk commissioned Daewoo Shipbuilding & Marine Engineering (DSME), in South Korea, to build its new Triple-E class container ships. What could have been the city’s industrial collapse became a turning point. Denmark could not compete with Asia on costs, but it found an alternative in high-value-added technological niches. Instead of building ships, the local industry began to develop more innovative marine engines, software for port automation and advanced thermal systems.

Odense soon followed that path. Its conversion did not happen overnight, nor was it the result of a perfectly executed master plan. It was, rather, an emergency response. Without shipyards or large naval contracts, the city had to look for an alternative. Public investment helped, the universities did their part and the industrial ecosystem did what it could with the tools it had. Robotics and automation looked like a promising route, a way to turn the technical knowledge inherited from the naval industry into something new.

Image | Universal Robots offices in Odense

But transforming a city is not easy. It is not enough to attract startups or roll out tax incentives. You have to generate talent, convince companies to stay and, above all, show that there is a market willing to sustain it all in the long run. That is exactly the phase Odense is in. Its old industrial heart is filling up with companies trying to make their way in robotics, such as Universal Robots and Mobile Industrial Robots (MiR), two of the most prominent firms born in this ecosystem. Universal Robots has specialized in cobots, collaborative robots designed to work alongside humans in factories without the need for safety barriers or complex programming. Unlike traditional industrial robots, which are usually confined to cells and operate with speed and force on repetitive tasks, cobots are designed for direct interaction with human operators. They should not be confused with humanoid robots. MiR, meanwhile, has opted for autonomous mobile robots, machines capable of moving through warehouses and logistics centers transporting goods.

A technology cluster in full boom. The growth of companies such as Universal Robots has not happened in a vacuum.
One of the keys to Odense’s transformation has been the development of a technology cluster specialized in robotics that is today one of the most dynamic in Europe. Across Denmark there are more than 300 companies dedicated to robotics and automation, and more than 160 of them are based in Odense. This ecosystem began to take shape in the 1980s and 1990s, when robotic technology started to be tried out in Odense’s shipyards, but its real consolidation has come in the last two decades: between 2015 and 2020 the number of companies in the cluster grew by 50%, according to the Odense Robotics Insight Report. At the center of this network is the University of Southern Denmark (SDU), which not only supplies talent to companies in the sector but also leads research in automation and artificial intelligence.

If you ask the local authorities, they have no doubts: Odense does not just want to be a reference in robotics, it wants to become the best city in the world for developing robots. “Odense is already the world center of collaborative robots, but we dream of making Odense the best robotic city in the world,” the local government claims. It is not just a slogan: it is a strategy already under way. One of the pillars of the plan is a robotics campus where startups, large companies and the University of Southern Denmark share research and ideas. This space should act as an innovation core, facilitating direct contact between emerging talent and established companies. The goal is to reinforce the network that already exists between the cluster’s companies and make the city even more attractive to foreign investment.

Odense is betting big, but it remains to be seen whether the play pays off. The city has made a clear commitment: it wants robotics to be its new flagship industry. It has a well-defined strategy, investment under way and a network of companies that is already working. But the hardest part is still ahead: turning this ecosystem into a sustainable long-term model. Odense is not competing alone. Globally, robotics has become a technological race in which only a few players will manage to consolidate. China, with its ambition to lead world automation, is investing billions in cities such as Shenzhen and Hangzhou, where industrial and large-scale service robots are being developed and where firms such as Unitree stand out, seeking to replicate the success Xiaomi achieved in mobile. Its dominance in robot manufacturing is not only a technological threat to the United States; it is also fueling a battle for hegemony in the robotics industry. Silicon Valley, meanwhile, remains one of…

Gemini Robotics is Google’s plan for robots to act in the real world

Robotics and artificial intelligence (AI) go hand in hand. It would be useless to develop humanoid robots capable of lifting tons and packed with state-of-the-art sensors if we did not have an intelligent system that lets them interpret their environment and act accordingly. Without AI, a modern robot would be little more than a pile of sophisticated but useless hardware. It is the advanced algorithms that turn that raw power into machines capable of learning, optimizing their performance and responding autonomously to the challenges they face. From ASIMO, Honda’s iconic robot of the 2000s, to Sophia, Tesla’s Optimus or Figure, AI has been making its way into humanoid robotics. Even so, we are still far from seeing machines that truly match the versatility of the human body. As advanced as they are, they still struggle to move in uncontrolled environments, and manipulating everyday objects can be a real challenge.

Gemini Robotics: Google’s bet to take AI to the physical world

Meanwhile, in the digital world, AI is advancing at a completely different pace. It can already hold conversations very close to a person’s, pass exams with surprising scores and solve complex problems at a speed that just a few years ago seemed like science fiction. It is a contrast that makes clear that, although artificial intelligence is progressing by leaps and bounds, there is still a long way to go in its integration with robotics. These challenges are giving rise to a new generation of AI models built specifically for this discipline. Google, as expected, does not want to be left behind and is already working on solutions that promise to take humanoid robots a step further. Its bet is built on Gemini 2.0, which now has two versions designed to improve the interaction with and control of these machines. On the one hand, Gemini Robotics focuses on vision, language and action (VLA), which allows it to take direct control of robots and improve their responsiveness in dynamic environments. On the other, Gemini Robotics-ER is designed for robotics experts, giving them the tools to develop and run their own programs with advanced reasoning capabilities.

Image | Gemini Robotics-ER stands out in spatial reasoning, with detection and pointing of 3D objects

Google has identified three essential qualities that, as they explain, robots must have in order to be genuinely useful to people.

Generality. A good robot should not only execute predefined tasks, but also adapt to novel situations and solve problems on the fly. It must be able to operate in new environments, handle unknown objects and interpret varied instructions without depending on prior training. According to internal tests, its performance on unforeseen tasks more than doubles that of other state-of-the-art vision-language models.

Interactivity. In a constantly changing world, robots must be able to communicate naturally and respond to instructions in real time. Gemini Robotics understands commands in everyday language and in multiple languages, adapting its behavior according to the conversation or the environment. It also continuously monitors what is happening around it and adjusts its actions in response to new orders or changes in the scene.

Dexterity. Many tasks that humans perform effortlessly require extremely precise motor skills, something most robots have not yet managed to master.
Gemini Robotics, however, is capable of performing complex multi-step tasks that require fine manipulation, such as folding origami or packing a snack into a Ziploc bag, demonstrating a higher level of dexterity.

Gemini Robotics does not just stand out at solving unforeseen tasks; its generalization ability far exceeds the performance of other vision-language-action models. According to Google’s technical report, it can adapt to novel scenarios and make decisions without prior training, bringing robots closer to real autonomy. It has also been designed to work with different types of robots. Although it was trained mainly on ALOHA 2, a two-armed platform, it has also proven able to control systems such as the Franka arms used in laboratories, and even more advanced humanoids such as Apollo, developed by Apptronik. Its flexibility makes it a model adaptable to many applications, from industry to assistance.

For now, there is no scheduled date for a general rollout of Gemini Robotics or Gemini Robotics-ER. The technology is still in development and, for the moment, only a small group of companies has access to these tools. Google DeepMind is collaborating with Apptronik on the construction of the next generation of humanoid robots, exploring how to integrate these AI models into more advanced systems. In addition, some trusted testers, such as Agile Robots, Agility Robotics, Boston Dynamics and Enchanted Tools, are already trying out Gemini Robotics-ER, although it is not clear whether that access will be broadened in the future. Meanwhile, Google DeepMind continues to work on new safety frameworks and benchmarks to evaluate the potential risks of AI in physical environments. All of this makes it clear that, although the project is advancing, there is still a long way to go before this technology reaches the general public.

Images | Google DeepMind
In Xataka | Faced with an AI that says yes to everything, a concern: this will never create an Einstein or a Newton
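A vision-language-action (VLA) model of the kind described here closes a loop between camera frames, a natural-language instruction and low-level actions, re-querying the model frequently enough to react when the scene changes. The sketch below only illustrates that loop; it is not Google’s API, and the robot interface, function names and action format are assumptions invented for the example.

```python
import time

class FakeRobot:
    """Stand-in for a real robot interface, just to make the loop runnable."""
    def camera(self): return "rgb_frame"
    def state(self): return {"joints": [0.0] * 7}
    def apply(self, action): pass

def vla_policy(image, instruction, robot_state):
    """Placeholder for a VLA model mapping (image, text, state) to an action.
    A real system would run a multimodal network here; this returns a no-op."""
    return {"joint_deltas": [0.0] * 7, "gripper": "hold"}

def run_task(robot, instruction, hz=10, cycles=30):
    """Closed-loop control: the policy is queried every cycle, so the robot
    can adapt if someone moves the objects or issues a new instruction."""
    for _ in range(cycles):
        action = vla_policy(robot.camera(), instruction, robot.state())
        robot.apply(action)
        time.sleep(1.0 / hz)

run_task(FakeRobot(), "pack the snack into the bag", cycles=3)
```

The design point is that the same loop works on very different bodies (a two-armed bench setup, a lab arm or a humanoid), because the embodiment-specific details live behind the robot interface rather than in the policy call.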

The next generative AI revolution will not be about reasoning better, but about integrating into physical robots. And it will change robotics forever

In the tech world we are fascinated by chatbots that write essays and take their time to reason. Grok 3 goes, Claude 3.7 arrives; meanwhile, something less visible but deeper is happening: the beginning of the merger between conversational AI and mechanical bodies. For the first time, robots do not just execute preprogrammed instructions. Now, in their own way, they also understand.

Historically, robotics and AI have followed separate paths. Parallel, but separate. Industrial robots were as precise as they were dumb. AI systems were intelligent, but disembodied. Think of the robotic arms that have existed on assembly lines for decades: millimetrically exact, yet utterly lost if a single component showed up in a position slightly different from the expected one. The new generation of robots connected to LLMs can now interpret ambiguous instructions, such as “bring me something for my thirst,” and solve the problem through reasoning (the word of the year): evaluating which drinks are available, whether the user has shown a preference for one, and even whether there is ice in the freezer. We no longer program specific movements, but general objectives.

Figure’s robots are good examples. So good that they even work autonomously at a BMW factory. According to what the company has just published, they can receive generic verbal instructions, such as picking up parts, and with no task-specific programming they are able to visually analyze the environment and find them. They can even pause, reassess the situation and correct the error if someone moves the parts. This capacity for contextual adaptation was unthinkable a couple of years ago.

The truly groundbreaking thing about embedding this in robots is that they can learn very differently. LLMs trained on text lack a physical understanding of the world. Traditional robots lack contextual intuition. Merging them produces an intelligence that encompasses both semantics and physics. A robot equipped with an LLM can not only understand the instruction “open that box without damaging its contents,” it can improvise with boxes it has never seen, assessing materials, closures and fragility.

The revolution, unfortunately, will not be as spectacular as in science fiction, but it will arrive in the form of robotic arms in factories that can be reconfigured with a spoken order. Or warehouse robots that understand contextual priorities. Or medical assistants capable of interpreting their patients’ unspoken needs. Boston Dynamics, the last word in robotics over the past decade thanks to its jumping, parkour-performing robots, is no longer as interested in acrobatics as in integrating comprehension systems that let its machines understand complex instructions on construction sites and in industrial settings. You only have to look at its website. And on the horizon loom Tesla’s Optimus, Xiaomi’s CyberOne, and Unitree as one of the great Chinese technological bets.

The big change will come when these systems stop failing in the face of the unforeseen and start applying general principles of physical and contextual reasoning. We are not witnessing the birth of artificial consciousness, but the understanding of the physical world and the world of meaning in a single integrated system. What masks this powerful convergence is its silent nature. It catches us arguing over whether Grok 3 deserves a better product or whether ChatGPT 4.5 will be enough for the rest of the year, while robots are beginning to understand the world the way we do.
Not just by calculating a trajectory, but by understanding intentions, contexts and meanings. That is far more transformative and valuable than any ten-page essay generated in seven minutes.

In Xataka | Deep Research is not just a new AI feature. It is the beginning of the end of intellectual work as we know it
Cover image | Figure, Ryunosuke Kikuno on Unsplash
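The “general objectives instead of specific movements” idea described in this piece usually takes the form of an LLM planning layer that decomposes a vague request into a sequence of robot primitives, which lower-level controllers then execute. The sketch below fakes the LLM call with a canned plan; the primitive names, the context fields and the whole interface are assumptions made up for illustration, not any vendor’s API.

```python
def fake_llm_plan(instruction, context):
    """Stand-in for an LLM call that turns a vague request into primitives.
    A real system would prompt a language model with the instruction plus
    scene context and parse its answer; here the plan is hard-coded."""
    if "thirst" in instruction:
        drink = context.get("preferred_drink", "water")
        steps = ["go_to(kitchen)", "open(fridge)", f"pick({drink})"]
        if context.get("ice_in_freezer"):
            steps.append("add(ice)")
        steps.append("deliver(user)")
        return steps
    return ["ask_for_clarification()"]

def execute(plan):
    # Each primitive would map to a motion controller; here we just print it.
    for step in plan:
        print("executing:", step)

context = {"preferred_drink": "sparkling water", "ice_in_freezer": True}
execute(fake_llm_plan("bring me something for my thirst", context))
```

Splitting the problem this way is what makes the setup reconfigurable with a spoken order: the language layer reasons about goals and context, while the primitives stay the same tested motion routines underneath.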
