They have kidnapped agents from Anthropic, Google and Microsoft for the sake of science. The three companies ended up paying

In some development teams it is already becoming common to rely on artificial intelligence agents to review incidents, analyze code changes and move through tasks that were previously left in human hands. The problem appears when these systems not only read information that may come from outside, but also operate in spaces where they coexist. sensitive keys, tokens and permissions. That is what recent research puts on the table: we are not simply facing a useful tool that can make mistakes, but rather an architecture that can also become dangerous if it is deployed without very clear limits. The alarm has been turned on Aonan Guan and Johns Hopkins researchers Zhengyu Liu and Gavin Zhong after demonstrating attacks against three agents deployed on the aforementioned platform: Claude Code Security Review, from Anthropic, Gemini CLI Action, from Google, and GitHub Copilot Agent, a GitHub tool under Microsoft. According to your documentation, The failures were communicated in a coordinated manner and ended in financial rewards paid by the companies, but what is relevant is that they point to a broader problem. This is how they managed to twist the agents from within The name that Guan gives to the discovery helps a lot to understand what this is all about: “Comment and Control.” The idea is simple to explain, although the substance is not so simple. Instead of setting up an external infrastructure to direct the attack, GitHub itself acts as an entry and exit channel: the attacker leave the instruction in a titlean incident or a comment, the agent processes it as if it were part of normal work and the result ends up reappearing within that same environment. Everything stays at home, and that is precisely the key to the problem. And that “everything stays at home” is not a minor detail, but the basis of what the research describes. The three agents share a very similar logic: they read normal content from GitHub, incorporate it as a work context, and from there, execute actions within automated flows. The clash appears because that same space not only contains text sent by third parties, but also tools, permissions and secrets that the agent needs to operate. The first case Guan details concerns Claude Code Security Review, an Anthropic GitHub action designed to review code changes and look for possible security flaws. Up to this point, everything is within what was expected. The problem, as the researcher explains, is that it was enough to introduce malicious instructions in the title of a pull requestwhich is the request that someone sends to propose changes to a project, so that the agent will execute commands and return the result as if it were part of your review. The team then managed to go a step further and demonstrate that it could also extract credentials from the environment. The interesting thing is that the same scheme also appeared in the other two services, although with nuances. At Google, Gemini CLI Action could be pushed to reveal the GEMINI_API_KEY from instructions snuck into an issue and its comments; In GitHub Copilot Agent, the variant was even more worrying, because the attack was hidden in an HTML comment that a person did not see on the screen, but the agent did process when another person assigned it to the case. In both scenarios, the background was the same again: apparently normal content that ended up twisting the behavior of the system until exposing credentials or sensitive information within GitHub itself. Guan assures that the pattern made it possible to leak API keys, GitHub tokens and other secrets exposed in the environment where the agent ran, that is, just the credentials that can later open the door to much more delicate actions. Who does this affect? Especially to repositories that run agents in GitHub Actions on content sent by untrustworthy collaborators and, in addition, give them access to secrets or powerful tools. The researcher himself clarifies that the risk depends a lot on the configuration: by default GitHub does not expose secrets to pull requests from forksbut there are deployments that open that door. And here another layer of the matter appears, less technical but just as important. As published by The RegisterAnthropic, Google, and GitHub ended up paying bounties for the findings, but none of the three had published public notices or assigned CVE at the time of that information. Guan was quite clear about this: he said he knew “for certain” that some users were still stuck on vulnerable versions and warned that, without visible communication, many may never know that they were exposed or even being attacked. So although there were mitigations and changes in documentation or in the internal treatment of reports, there was no equivalent public notice for all those potentially affected. Anthropic settled the case on November 25, 2025 and paid $100 Google rewarded the discovery on January 20, 2026 with $1,337 GitHub closed the case on March 9, 2026 with a payment of $500 What makes this case especially delicate is that GitHub does not seem like the end of the road, but rather the first visible showcase. Guan argues that the same pattern can probably be reproduced in other agents who work with tools and secrets within automatic flows, and there he mentions from Slack-connected bots to Jira agentsmail or deployment automation. The logic is the same again: if the system has to read external content to do its job and also has enough access to act, the field is fertile for someone to try to twist it from within. The conclusion that Guan reaches is not about selling a magic solution, but about returning to a fairly classic idea in security: giving each system only what is essential to do its job. If an agent reviews code, they shouldn’t have access to tools or secrets they don’t need; If you’re just summarizing issues, it wouldn’t make sense for you to write to GitHub or touch sensitive credentials. That … Read more

That Alibaba creates its own chip for AI agents is no surprise. Let it be neither ARM nor x86, but 5nm RISC-V, yes

The Chinese giant Alibaba just announced the launch of its new high-end CPU, the XuanTie C950 processor. Developed by and for AI agents, it is a five-nanometer chip with a speed of 3.2 GHz whose surprise is not in any of these figures. The surprise is in its architecture, which is neither x86 nor ARM, but RISC-V. Therefore, it is not only the most powerful RISC-V processor created to date, but also a declaration of intent that can be summarized in two words: technological sovereignty. What is this chip about?. XuanTie CPUs are developed by Damo Academy, Alibaba’s research division. The previous model, the XuanTie C930, was announced on March 10 as the first server grade processor developed by Alibaba. Just two weeks later, the Chinese company has announced a new chip, the XuanTie C950, which is, according to the firm, three times more powerful than its predecessor (the C920 announced in 2024). Alibaba has not revealed which factory produced it, but it is based on the RISC-V architecturethat its process is five nanometers and that its speed amounts to 3.2 GHz. This launch occurs in a very particular context. Just a few days ago, and in response to the rapid adoption of OpenClaw by local companies, Alibaba Wukong announced.its platform for deploying AI agents in enterprise environments. This chip aims to improve the inference. In other words, the XuanTie C950 will serve to improve the computational process carried out by the language models in order to generate the responses that correspond to the requests they receive. In a context of agents working with files, data, and diverse environments, this is important. Processor prototype based on RISC-V architecture | Image: Wikimedia Commons Why RISC-V? Mainly, because unlike x86 and ARM, RISC-V is open and its use does not imply paying for licenses. According to Alibaba, “RISC-V’s open standard nature allows chip designers to customize instruction sets and accelerate specific AI workloads with little or no licensing costs. This is especially important for the development of AI agents.” Let’s think of RISC-V as what Linux is to Windows and Mac. If a company wants to use x86 (Intel and AMD) or ARM (SoftBank) architectures, it must pay a license. Not only that, but x86 and ARM are exposed to possible restrictions by the United States. With RISC-V, this risk disappears, which is why so much China like the European Union have found in it an escape valve towards sovereignty and technological independence. The surprising thing. That a Chinese company has managed to produce a five-nanometer chip is, to say the least, striking. To manufacture these processors it is necessary to use deep ultraviolet lithography (UVP) and, normally, machinery produced by the Dutch ASML. We know that SMIC (Semiconductor Manufacturing International Corp), the largest Chinese semiconductor manufacturer, had been at least since 2023 developing its own five-nanometer lithography, but with unacceptable results. When a chip wafer is manufactured, it is normal for some of its cores to malfunction. If we talk about profitability, the yield per wafer must be 70%that is, seven out of every ten cores produced work. In the year 2025, the yield of SMIC wafers was at 30%. That today, at the beginning of 2026, we see a five-nanometer chip produced, a priori, in China, would be a punch on the table by the Asian country and a strong sprint in the AI ​​race. However, it does not seem feasible. The other option, and perhaps the most plausible, is that it is not manufactured by SMIC, but by TSMC. SMIC has not managed to manufacture five-nanometer chips using the multiple patterning on your ASML UVP machines. The Taiwanese TSMC does have that capacity and, according to Nikkei Asiawill be the one who manufactures it. Be that as it may, it is a great step for the RISC-V architecture, which has gone from being relegated to small devices to reaching the league of the big ones. Featured image | Alibaba In Xataka | There is a city in China that goes head to head with Silicon Valley: welcome to Hangzhou, the home of the ‘Six Little Dragons’

a system governed by AI agents

The way we use mobile apps could be entering a new stage. Until now, the Android experience has been based on something very simple: opening applications and performing step-by-step actions within them. However, Google is exploring a different model, in which artificial intelligence acts as an intermediate layer between what we ask for and what apps can do. In that scenario, we won’t always be the ones scrolling through menus or completing processes manually. In many cases, it will be enough to express what we want to do so that the system will try to solve it for us, coordinating different phone functions. The next step in Android. In a post on the official developer blogthe company presents new capabilities designed so that applications can work directly with assistants and AI systems. These functions are designed so that tools like Gemini can discover and execute certain actions within some apps. The project is still in an early phase, but it suggests a very specific direction: begin to reconfigure Android as an environment in which artificial intelligence can help complete tasks. What do we understand by agent. In the field of AI, an agent is a designed system to move from response to action. While early digital assistants functioned as consultation tools, agents attempt to understand an intention and plan how to carry it out. To do so, they combine several capabilities: understanding natural language, evaluating the context and deciding what steps are necessary to fulfill a request. It is not just about generating text or suggestions, but about organizing a small chain of decisions oriented towards a specific objective. If we follow the reasoning that Google presents in its publication, the change does not only affect AI, but also how applications are conceived within Android. For years, the main objective of any app was to get the user to open it and complete all the necessary actions within it. However, now that criterion is beginning to shift. In this new scenario, success begins to be measured less by getting us to open an app and more by its ability to help complete a task, even when the user does not directly interact with its entire interface. One of the first pieces of change. The first path that Google proposes to move in this direction goes through something it calls AppFunctions. It is not a user-visible function as such, but rather a set of tools with which developers can expose functions and data of their apps to intelligent assistants such as Gemini. The example mentioned by the Android blog itself is quite illustrative: on the recently introduced Galaxy S26 seriesGemini can access Samsung Gallery features to locate specific photos based on a natural language request, such as asking to show images of a pet. In that case, the assistant interprets the request, activates the corresponding Samsung Gallery function and returns the result without requiring the user to manually navigate through the gallery. The other way of Google. Along with direct integrations, the company is preparing a second formula to extend this model to more applications. As he explains, it is an interface automation system that will allow Gemini to take care of generic multi-step tasks without depending on a specific connection between the app and the assistant. Instead of relying on a function previously exposed by the application, the AI ​​acts directly on the interface. Google notes that this initial preview will be tested on the Galaxy S26 series and some Pixel 10within the Gemini app and with a limited selection of delivery, grocery and transportation applications in the United States and Korea. The company also ensures that the user will be able to follow the process through notifications or a live view, resume manual control at any time and receive notifications before sensitive actions, such as a purchase. Looking to the future. If Google’s announcement makes anything clear, it is that Android is beginning to prepare for a different stage. The functions presented are still in development and their deployment will be gradual, but they point to a specific direction: an operating system in which artificial intelligence plays an increasingly active role in the way we perform daily actions on mobile phones. Pixel and Samsung appear for now as the most visible references, although Google suggests that it wants to bring these capabilities to more manufacturers as the ecosystem evolves. As is often the case with these types of changes, the final result will depend on how the tools, integrations and the response of the users themselves evolve. Images | Google In Xataka | The iPhone has been a “made in China” phone for decades. Now it is changing countries at full speed: India

Meta just bought one designed for AI agents

If we look back, the history of social networks is deeply linked to a very specific idea: connecting people. For years, platforms like Facebook were presented as places to keep in touch with friends, family or co-workers. That logic is still present, but the panorama is beginning to incorporate new actors. Meta has confirmed the acquisition of Moltbook, a platform created for artificial intelligence agents to interact with each other within a social network-like environment. The purchase. We are facing an agreement that does not go unnoticed. As part of the transaction, Moltbook creators Matt Schlicht and Ben Parr will join Meta Superintelligence Labs, the AI ​​unit led by Alexandr Wang, former CEO of Scale AI. The company has not revealed the economic conditions of the operation, but a spokesperson told TechCrunch That the arrival of new talent opens new avenues for AI agents to work for people and companies, and their approach to connecting agents represents a novel step in a rapidly evolving space. A social network for agents. What differentiated Moltbook from other platforms was precisely its approach. Instead of focusing on human profiles, the site allowed AI agents to post messages and interact with each other within a forum-like format. Many of these agents used OpenClawa tool that connects models like Claude, ChatGPT, Gemini or Grok with common messaging applications, including iMessage, Discord, Slack or WhatsApp. That combination turned Moltbook into a very striking experiment within the technological world, to the point of leaving the most specialized circle. An experiment with risks. The rapid popularity of Moltbook also exposed some major problems. Security researchers discovered that the platform had flaws that allowed human users to impersonate AI agents and publish messages as if they were autonomous systems, so that the environment designed for interaction between agents was not as solid as it seemed. Wiz also detected a vulnerability that exposed private messages, more than 6,000 email addresses, and more than one million credentials. open question. All this leaves an open question that still does not have a clear answer: how will Meta leverage this purchase in its artificial intelligence strategy. While there are clues, he has not explained how exactly he plans to use this project within his products or research. What we do know is that the operation comes at a time when large technology companies are competing for talent, tools and new ideas around autonomous agents. Images | Dima Solomin | Moltbook In Xataka | OpenAI is hitting the brakes with Stargate. The reason: Oracle builds yesterday’s data centers with tomorrow’s debt

US agents denounce that it is failing in a key point

Social networks have been using automated systems for years to try to detect some of the most serious crimes that circulate on the internet. Among them is child sexual exploitation, a phenomenon that forces platforms, regulators and security forces to monitor enormous volumes of content every day. The promise of these tools is clear: identify potential cases sooner and make the work of agents easier. However, some specialized teams in the United States maintain that the volume of notices they receive from Meta platforms has skyrocketed and that a significant portion of them do not provide useful information for action. Clash between scale and utility. In a lawsuit underway in New Mexico, prosecutors maintain that Meta did not adequately disclose what it knew about the risks minors face on its platforms and that it violated state consumer protection laws. According to the Associated Pressthe indictment also argues that the company presented the safety of its services in a way that did not correspond to the risks faced by children and adolescents. The case is part of a broader wave of lawsuits filed in the United States against large technology companies for the effects their services may have on minors. Meta rejects that interpretation. In his speech before the jury, the company’s lawyer Kevin Huff defended that the company has reported the risks associated with the use of its services and that it has introduced different tools to detect and eliminate harmful content. According to the Associated Press, Huff insisted that the central point of the case is not to prove that problematic content exists on social networks, but rather to determine whether the company hid relevant information from users. Researchers on the front line. Those who have provided figures and concrete examples of this problem are agents who work directly in investigations of child exploitation on the Internet. In the United States, those tasks fall largely to the network of units known as Internet Crimes Against Children (ICAC), a program that brings together police forces at different levels and is coordinated with the Department of Justice to investigate and prosecute crimes committed against minors in digital environments. Its agents receive notices about possible cases from different sources, including the technology platforms themselves. During the trial, some of these agents have described how they are experiencing the increase in ads from Meta platforms. Benjamin Zwiebel, ICAC special agent in New Mexico, explained in court that many of the notices they receive are of little use in advancing an investigation. “We get a lot of advice from Meta that is just garbage,” he declared, according to The Guardian. His words reflect a broader concern within these units: the volume of alerts has skyrocketed, but not all of them contain the information necessary to identify a suspect or initiate police action. Poor quality. In some cases, reports sent from the platforms include data that does not describe criminal conduct. In others, they do point to a possible crime, but they arrive without essential elements to continue the investigation, such as images, videos or fragments of conversations that allow those responsible to be identified. Without this material, agents have few tools to advance the case or request new proceedings. Some agents have also noted that a portion of these notices arrive with incomplete or partially removed information. The mass reporting machinery. Behind this increase in notices there are several factors that help to understand why the volume of reports sent to the authorities has skyrocketed. In the United States, technology companies are required by law to report any child sexual abuse material they detect on their services to the National Center for Missing & Exploited Children (NCMEC), an organization that acts as a national center for receiving these notices and subsequently distributes them to the corresponding police forces. Agents cited by The Guardian also point to recent legal changes, such as the Report Act, which came into force in November 2024, as a possible factor that would have increased the number of notices sent to avoid non-compliance. Meta says he’s doing the opposite.. The company rejects the idea that its systems are making the work of the authorities more difficult and maintains that, on the contrary, it has been collaborating for years with security forces to detect and prosecute this type of crime. A Meta spokesperson stated that the United States Department of Justice has recognized on several occasions the speed with which the company responds to requests from authorities and that NCMEC has positively evaluated its notice notification system. According to the company, in 2024 it received more than 9,000 emergency requests from US authorities and resolved them in an average time of 67 minutes, a process that, it claims, is accelerated even more when it comes to cases related to child safety or the risk of suicide. Meta also notes that it reports to NCMEC any material that may be linked to child sexual exploitation and that it works with that organization to help prioritize the notices, including by labeling those it considers most urgent. a real problem. Regardless of what the jury in New Mexico determines, the case reflects a tension that goes beyond a single company or a single state. Digital platforms operate on a global scale and use automated systems to detect illicit content in volumes that would be impossible to review manually. However, the experience described by some agents shows that increasing the number of tips does not always translate into more effective investigations. Images | Dima Solomin | ROBIN WORRALL In Xataka | Dario Amodei founded Anthropic because OpenAI didn’t take the risks of AI seriously. Now you are going to give in to those risks

AI agents have indeed changed work and the economy forever. But for now only in one sector: programming

AI agents are beginning to demonstrate their capabilities, but the only area in which they do so is programming. An Anthropic report reveals how software engineering is where half of the activity of AI agents is currently concentrated, and that proves two things. The first, that AI can effectively enhance work. The second, that there is a huge opportunity for hundreds of verticals where AI has barely landed. what has happened. If there is a sector that has embraced AI and AI agents, it is programming. Platforms like Cursor or WindSurf first and like Claude Code, OpenAI Codex or Antigravity today have made all kinds of people —whether they know programming or not— can turn their projects into reality in a really simple way. It’s a clear case of how AI can contribute to a field, but there’s a problem: it’s practically the only case where it has actually done so. Distribution of requests to AI tools by segment. Software engineering is almost responsible for 50% of those calls or requests, at least in the case of the Claude platform. Source: Anthropic. Verticals with a lot of margin. As can be seen in this graph, the presence of AI agents is very reduced or practically non-existent in a large number of verticals in which it is evident that there is a notable opportunity to take advantage of these tools. The automation of office tasks is the second main protagonist with 9.1% of the function calls of the Anthropic AI model in this report. Below it we find segments such as marketing, sales, finance, business analysis or scientific research. And others who are ignoring AI. There are quite a few sectors in which AI agents seem to be barely present. The travel, legal, medical, e-commerce or education segments seem perfect to start taking advantage of these tools, but at the moment this is not the case and this presence is very, very small in all of them. Claude Code can work longer and longer. Double what it was three months ago, in fact. Source: Anthropic. Models can now work autonomously for a long time. In these scenarios it is true that the models used to be limited by the time they could function autonomously and “chain” actions and self-analyze progress to continue acting. That’s not so true now. Claude Code, for example, has doubled the time of his longest sessions in just three months: from 25 minutes in October 2025 to 45 minutes in January 2026. And they need less human intervention. Another of the revealing data of the study is that the evolution of these agents not only means that they can function autonomously for longer periods of time, but that this also implies fewer human interventions. Those situations in which an agent “needs human help” to continue with the process are becoming limited. In August 2025, the average was 5.4 human interventions per session. In December that average dropped to 3.3 interventions. We trust more and more in AI. At Anthropic they have also noticed a unique behavior among users: they are increasingly trusting AI agents. In programming, novices approve each new step before it is executed, but veterans delegate and intervene when something goes wrong: they have gone from pre-approving everything to exercising active and constant monitoring. As they say at Anthropic“Users develop confidence as they work with the model, and change their monitoring strategy based on that growing confidence.” From programming to other fields. What has happened with programming could happen in other scenarios. The challenge is to build AI agents that adapt to each segment using that specific data from said vertical. If an AI wants to help in the legal segment, it must be specifically trained for that segment. What the AI ​​did when trained with thousands of code repositories on GitHub It was learning and improving. Well, the same can be applied to other verticals, although the challenge is certainly notable because programming was a perfect segment for the application of AI: it is very deterministic. It either works or it doesn’t, and whether it does or not, execution logs allow you to fine-tune that operation. The new unicorns await. As entrepreneur Garry Tan points out in your newsletterin the last two decades SaaS platforms have managed to capture 40% of venture capital investments and that industry has more than 170 unicorns. “The thesis is simple,” Tan concludes, “all of those unicorns have an equivalent in the form of vertical AI waiting.” Promises and realities. The AI ​​agent segment therefore promises many changes in a multitude of segments, but the reality is that today the practical success (there is no economic success at the moment) of AI is limited to the world of programming. Will we be able to transfer it to other segments? The opportunity is there, but it is one thing to say it, and quite another to do it… even if it is with AI. Image | Joshua Reddekopp In Xataka | Every time Facebook had a competitor, it bought it: it is exactly the same thing that OpenAI is doing

The best AI agents that are faster and easier to use to do tasks for you without complications or long installations

Let’s tell you the best fast and easy AI agents to use, without complicated installations and configurations. This type of AI agent They are less complete and powerful than the more complete and advanced ones, but they allow you to explore how the artificial intelligence can do tasks for you. We are going to make a small list to stick to the best alternatives. Many are quite popular, others are more unknown, and we even ended up with an open source alternative for privacy lovers. Claude Cowork Claude Cowork It is possibly the best and simplest tool to test the benefits of a medicine, but in a controlled way. It is a paid feature that you can use within the desktop application of Claude. The price to use it starts at 15 euros per month. Claude Cowork allows Claude’s AI to manage files and use applications on your computer. You tell him what you want, and Claude will find the best way to do it. Also, if you install the extension Claude in Chrome in your browser, Cowork will also be able to do things for you in the browser. Perplexity Comet Comet is the browser with artificial intelligence Perplexitya platform that started as a search engine based on artificial intelligence, and now it is much more. It is now a chatbot that allows you to use various artificial intelligence models, such as Gemini, GPT or Claude. The Comet browser has the peculiarity that can use AI to do tasks for yousuch as browsing you, interacting with websites, automating tasks, searching and filtering information, managing workflows and other tasks such as comparing prices on multiple pages. Manus on Telegram Manus is an autonomous AI agent, to which you can give a high-level objective and it works on its own to achieve it. Tasks are asynchronous, so you can ask it to do something, turn off the computer, and receive a notification when the work is completed. Manus also has the ability to used in Telegram chats like a bot With this, you will be able to use Manus directly from the messaging app and without entering its official website or application, and then you will be able to access them to see the result of AI research, web development, design, whatever you have asked. ChatGPT Agent ChatGPT also has an agent mode in your application. With it, you will be able to interact directly on web pages, ChatGPT will act on your behalf to book appointments, create presentations and perform other complex tasks. Of course, to use it you will need have a paid subscription in AI. Genspark This platform is a kind of all-in-one AI worksspace. It is not exactly a chatbot but acts in a similar way to the concept of an agent, planning taskschoosing the correct tools to do it, and chaining the steps autonomously. With this tool you will be able to create applications, documents, designs, images, music, spreadsheets and more. It has a free plan with limited access, although you will have to pay to access everything. Also has more than 80 toolsand eight language models of different sizes, each for a task. AgentGPT This was one of the first services to make AI agents accessible from the browser without having to install anything. It works similar to the previous ones, you have to write what you want with natural language, the agent divides this into subtasks, and then executes them autonomously. Kuse Cowork Kuse is an open source alternative to use an agent capable of helping you perform tasks on your computer. It can generate documents and presentations, transform d oc files, PDFs, you can also create mind maps, interact with YouTube videos and more. It is therefore an open alternative to Claude Cowork, where you can decide which AI models to use attaching them with their API, or even installing them directly on your computer. In Xataka Basics | How to create a Telegram bot that sends you a summary made by Gemini of each email you receive in Gmail and other emails

TIA agents are better ambassadors for the CSIC than we suspected

If we think about Mortadelo and Filemónwe also immediately think of all the outrages that the TIA agents have to suffer because of the inventions of Professor Bacterio, the translation into the Carpetovetonic language of the iconic mad doctor which is a foundational part of the science fiction imagination. But there is more: a traveling exhibition traces the history of science in the last half century through the creations of Ibáñez. What does it consist of? The Higher Council for Scientific Research has premiered the exhibition ‘The science of Mortadelo and Filemón‘, which will remain open until February 15 before beginning its tour of various Spanish cities. The exhibition brings together 39 covers published between 1975 and 2018, organized into five thematic blocks that examine everything from Bacterio’s chaotic inventions to climate crises and epidemics. Pura Fernández, vice president of Scientific Culture of the CSIC, highlights in ‘El País’ that Ibáñez turned research into an everyday occurrence through humor. The sections. The exhibition structures its 39 covers into five thematic blocks that document the evolution of Spanish scientific thought and that link to CSIC research through QR codes for visitors: ‘A world in motion under the magnifying glass of science’ examines natural phenomena: from glacial retreat to epidemiological crises, including agricultural innovations. ‘Technological innovations incorporated by the TIA’ satirizes inventions that generate more chaos than solutions, questioning whether technology responds to real needs or commercial impulses. Professor Bacterio stars in his own section as the archetype of the researcher isolated from the world: in ‘Bacterio’s laboratory, successes and accidents’ his failed experiments raise dilemmas about ethics and safety in laboratories. ‘Science in the social mirror’ addresses information manipulation, pseudoscience and responsible communication. ‘Emergency science for troubled times’ talks about climate change, air pollution, invasive species such as the tiger mosquito, and Saharan dust intrusions. How it works. Francisco Ibáñez built a visual archive of Spanish scientific development over six decades. What began in 1958 as detective adventures evolved into a satirical chronicle of Spainwhich included technological modernization. Starting in the seventies, with Spain in full transformation, its covers captured real milestones: the takeoff of the space race in ‘El cocoa spatial’, genetic engineering in ‘The people copying machine’ or the phenomenon of drones in ‘Drones matones’, until reaching the climate alerts of the 21st century. His method was far from the anticipatory rigor of Franco-Belgian comic icons such as Hergé (who consulted the zoologist Bernard Heuvelmans and the astronautics expert Alexandre Ananoff in the Tintin album ‘Target: The Moon’) or the historical accuracy of Goscinny in Asterix. His territory was immediate parody: he transformed scientific headlines into slapstick visual, turning Bacterio’s laboratory into a distorting mirror of contemporary research. The CSIC and pop culture. The public body trusted for years in Spanish graphic humor to democratize knowledge. Fernando del Blanco, head of the library of the CSIC Research and Development Center, inaugurated ‘Science according to Forges’ in 2019, bringing together 66 cartoons by the cartoonist published in ‘El País’ between 1995 and 2018. With this one by Mortadelo he shared a methodology: transforming recognizable cultural figures into bridges to complex scientific concepts. Humor allows us to address everything from the Higgs boson to budget cuts in science. Science versus parody. As Pura Fernández comments in the aforementioned ‘El País’ article, Mortadelo and Filemón manage to discredit practices without delegitimizing the need for knowledge. Bacterio embodies a poor application of science: isolation, lack of peer review, continuous risks… However, his inventions address real phenomena. In this way, he emphasizes, the public understands the reading that Ibáñez proposes: Bacterio satirizes malpractice, not science itself. In Xataka | When Ibáñez lost the rights to Mortadelo in 1985, he created a new magazine where they would have another name: ‘Yo y yo’

AI companies promised to be happy with their autonomous agents, until they came across Amazon

AI agents promise us to perform complex tasks autonomously, such as book trips either make the purchase. Although is improvingagentic AI still it’s quite greenbut it has just come across an obstacle that we had not counted on and that could change everything: that there are companies that do not want AI agents roaming their stores. This is what just happened between Amazon and Perplexity. What has happened? They tell it in Bloomberg. Amazon is suing Perplexity to stop the agent built into its Comet browser from purchasing items from Amazon. According to Amazon, Perplexity has committed computer fraud by allowing its agent to browse and make purchases as if they were a real person, which violates its terms of service on transparency. They also claim that the use of automated agents can negatively affect the shopping experience on their platform. Why is it important. The case could set limits for autonomous AI agents in real-world tasks that require using third-party services, such as in this case Amazon. If stores or travel platforms close the door to AI agents, the promise of autonomy is compromised. On the other hand, leaving all doors open could influence e-commerce. It is something that has already happened before, such as cases of bots buying tickets to shows. Bullies. Perplexity has responded with a post on your blog in which they describe the move as “corporate bullying” and affirm that it is “a threat to all Internet users.” They also highlight that Comet users love the agentic AI features and that Amazon should too because it translates into more purchases and happy customers. For the company, an AI agent should have the same rights and responsibilities as a real human user since the agent is acting on behalf of the user. “It’s not Amazon’s job to oversee that,” Aravind Srinivas, CEO of Perplexity, said in an interview. Agents on Amazon. Amazon already has its own assistant Rufus and is developing its own agents, so there are more reasons behind this movement against Perplexity. It is not about protecting the experience, or at least not only about that, but that Perplexity is a direct competitor. Perplexity champions choice. “I don’t think it’s customer-centric to force people to only use their assistant, who may not even be the best shopping assistant,” Srinivas said. AI Ecosystems. The dispute between Amazon and Perplexity is the first example that the AI ​​war is also about ecosystems. It presents a scenario in which service providers decide whether an AI agent can enter their stores or travel platforms, or if they prefer to develop their own and force users to use that. The truth is that Amazon had already blocked the Perplexity agent a few months ago, but the company released an update that circumvented the blocking. We’ll see how everything turns out. Image | Pxhere In Xataka | CAPTCHAs had become an excellent tool to fight bots. Until ChatGPT Agent arrived

The new trend in AI is “AI agents.” The only problem is that almost no one is clear about what they are.

It is not the first time that a word becomes fashionable in the technology sector. It has happened with IoT, Big data, Blockchain and even 5G. In English they call it buzzword and refers to those terms that are repeated and repeated until they almost lose their meaning. It has happened with AI and, now that we have overcome that first stage, it was time to give it a surname. The chosen one is agentic AI and, suddenly everything is agentic AI. I experienced it a couple of weeks ago in the Qualcomm Snapdragon Summit. During the different conferences, the most repeated words were “agents” and “agentic”. The problem is that they didn’t show any real products that actually fit this definition. They are not the only ones, there is a whole wave of companies that already call agentic AI literally anything that has minimal automation. Agents all the time everywhere Agentic AI It was going to be a revolution in 2025but reality ended lower the smoke to the gurus of the sector. With this I don’t mean that everything is a hoax, AI agents are very real and they are already here. We can try them If we have the ChatGPT Plus plan. At the development level, Anthropic allows create scheduling agents with Claude and Google with Gemini. Other platforms like Salesforce offer their own custom AI agents for specific sectors such as public or industrial. They are improving a lot, but the reality is that AI agents They are still very green as has been demonstrated in enough tests. There is no real product, not even one that they are developing, everything is part of a dream, one in which AI agents are even in the soup. Being cautious and waiting for technology to develop does not suit many companies. Returning to the case of Qualcomm, in the “The Ecosystem of You” conferenceits CEO Cristiano Amon drew us a future in which “the agent” does everything for us, absolutely everything: “The agent will understand our world and will be helping us, anticipating every need.” The problem is that everything he showed It was simply a demo. There is no real product, not even one that they are developing, everything is part of a dream, one in which AI agents are even in the soup. What is agentic AI It is also known as agentive AI, agential AI or simply “AI agents”. Google defines it as “an advanced form of artificial intelligence focused on autonomous decision-making and action.” For NVIDIA is an AI that “uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems.” For amazon It is “an autonomous system that can act independently to achieve predetermined goals” and they add that, unlike generative AI, agential AI “is proactive and can perform complex tasks without constant human supervision.” It seems pretty clear, generative AI responds to one request at a time, while agentic AI can achieve more complex goals, making decisions autonomously. An AI agent must be able to collect information, use tools and solve problems to achieve the objective we have given it. They call it agentic AI because “AI with slight automation” doesn’t sound so good In another of the Snapdragon Summit conferences they showed us several products that were real, one of them is Page.aian AI assistant that works locally on mobile. During the intervention, the presenteror stopped repeating that the app had agentic functionswhen the most they showed was how the AI ​​was capable of organizing a barbecue: it created an event on the calendar and then invited a friend. What caught my attention is that the creator of the app did not use the word, but was the presenter. The reality is that many of the use cases presented as agents are, at best, a kind of IFTTT on steroids. In this CNBC articlethe head of AI at the consulting firm EY assured that “Many in the market want to take advantage of it. We have witnessed an incredible change of image of everything related to generative AI, which is now presented as agentic AI.” “Agent washing”: when products that are sold as agents are actually products that already existed. At the beginning of the year, Gartner surveyed more than 3,000 companies who promote AI agents and discovered a trend they call “agent washing.” That is to say, many products sold as agents are actually products that already existed. Gartner estimates that of the 3,000 companies, only 130 sell real AI agents. “Most agency AI proposals lack significant value or return on investment,” said analyst Anushree Verma. The firm predicts that more than 40% of agentic AI projects will be canceled before the end of 2027. Why so much hype? In May of this year, a survey of senior American executives revealed that 88% of companies had planned to increase their AI budget before the arrival of agents. Most respondents believed agentic AI was going to change workplaces more than the internet did, and nearly half were worried about competitors adopting AI agents before them. The fear of being left behind has encouraged many companies to jump into the pool without fully understanding what agentic AI is. It makes sense that they want to hype it up and even “cheat” by calling agents who really are not: they are investing a lot in this and they need it to turn out well. Image | Gemini In Xataka | A group of AI experts attended a party at a mansion. The topic of conversation: what will happen when AI ends humanity

Log In

Forgot password?

Forgot password?

Enter your account data and we will send you a link to reset your password.

Your password reset link appears to be invalid or expired.

Log in

Privacy Policy

Add to Collection

No Collections

Here you'll find all collections you've created before.