OpenClaw is the total AI agent that challenged Big Tech. Big Tech’s response: buy it, of course

Peter Steinberger was a great unknown to the vast majority of the planet until less than a month ago. His project, initially called Clawdbot (later Moltbot and finally OpenClaw), became the new sensation of the internet and of the AI world. Its growth has been so spectacular that the majors of this segment set their eyes on it and, inevitably, began to fight to sign its creator and acquire his project. We already have a winner of that bidding: OpenAI.

What is OpenClaw. OpenClaw is what we could define as "the total AI agent": a system that uses one or more AI models, such as those from OpenAI, Anthropic or Google, to do things for you. Here are some differences from using those models in the "traditional" way:

- You can chat with your AI agent through messaging apps like Telegram or WhatsApp, as if it were just another contact.
- OpenClaw takes full control of the machine you install it on, whether that is an old PC, a Raspberry Pi or a VPS, for example. It has permission to do whatever it wants inside that machine, which also involves risks.
- The capability of current models, such as Opus 4.5, makes the agent genuinely autonomous and proactive: it can suggest things to you or make decisions based on the conversations you have with it.

OpenAI buys OpenClaw. Last week Steinberger had already mentioned in an interview with Lex Fridman that OpenAI and Meta had made offers to sign him and acquire his project. Those intentions crystallized on Saturday, when the creator of OpenClaw announced that he had signed with OpenAI and that the OpenClaw project "will become managed by a foundation and will remain open and independent." It was a more than reasonable exit for Steinberger, who will probably have received a significant sum of money and prestige, but it leads us to the eternal question: can you compete with the big companies? Short answer: probably not.
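OpenClaw's internals are not described in detail in this article, so the following is only an illustrative sketch of the pattern it outlines: a chat message arrives from a messaging app, a model decides on an action, and the agent executes it directly on the host machine. The `fake_model` stub and its command mapping are invented for the example; a real deployment would call an actual LLM and a real Telegram or WhatsApp bridge.

```python
# Minimal sketch of a "total agent" loop, assuming a stubbed model and
# chat transport. This is NOT OpenClaw's code, just the general shape.
import subprocess

def fake_model(message: str) -> str:
    """Stand-in for an LLM call (e.g. Opus 4.5): maps a request to a shell command."""
    if "disk" in message.lower():
        return "df -h"
    return "echo 'nothing to do'"

def handle_chat_message(message: str) -> str:
    command = fake_model(message)  # the model chooses an action
    # Full machine control: the agent runs the command directly on the host,
    # which is exactly where the power (and the risk) comes from.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout.strip()

print(handle_chat_message("say hello"))  # → nothing to do
```

The key design point, and the source of the risk the article mentions, is that the model's output is executed without a sandbox: whatever the model decides, the host runs.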
Large companies have always been hampered by their own size when it comes to reacting quickly to new trends, and even the largest AI companies suffer from this problem. OpenClaw was doing something that none of them had dared to do (partly because this type of agent has too much "power"), but with these projects, and with the startups now beginning to emerge, the same thing always happens: either the big companies copy the idea and end up burying the original, or they buy the startup that threatened to compete with them. For many startups, in fact, the "exit" strategy is precisely to be bought by a large company.

A creator who didn't want to be CEO. Steinberger explained in his post how his project opened up "an endless string of possibilities" for him, and confessed: "yes, I could really see that OpenClaw could have become a giant company. But no, I'm not excited about that. I'm a creator at heart." Steinberger has already created a company and dedicated 13 years of his life to it; "what I want is to change the world, not create a big company, and partnering with OpenAI is the fastest way to bring this to the entire world."

One person's first unicorn? Soon after ChatGPT appeared, people began to talk about the "Solo Unicorn" phenomenon: a startup created by a single person that, thanks to AI, would come to be valued at more than a billion dollars. We do not know what price OpenAI has paid for this signing, but it is unlikely to reach that figure. What does seem evident is that OpenClaw was exactly the type of project and idea that could have become that "Solo Unicorn".

The era of custom AI agents. Sam Altman, CEO of OpenAI, confirmed the news on X.
There he indicated that the creator of OpenClaw had joined OpenAI "to lead the next generation of personal agents", and highlighted that "we expect this (personalized AI agents) to quickly become an integral part of our product offerings." He also assured that OpenClaw will remain open source, something that was probably one of the essential conditions Steinberger set for joining the ranks of OpenAI.

And now what. That the project remains open source and independent is great news, and in theory it will allow OpenClaw to continue functioning as before, while having OpenAI's resources could undoubtedly make it grow exceptionally. It remains to be seen whether that ends up having a negative impact in some way, but what also seems clear is that this type of "full AI agent" could soon become an integral part of the offerings of other AI companies.

Welcome to the era of total AI agents. We had already partially seen what OpenClaw does with projects like Computer Use from Anthropic, Project Jarvis/Mariner from DeepMind and Operator from OpenAI itself. Those projects let the AI do things for us in the browser, but OpenClaw does things for us with all the applications on the machine it is installed on (the email client, the command console, etc.). We are facing an interesting stage for this type of system.

In Xataka | OpenClaw is one of the most fascinating and "dangerous" AIs of the moment. A Malaga company has come to the rescue

ZTE already has a phone with an AI agent that does things for you, and it’s sold out

Many technology enthusiasts have spent years imagining a future in which words are enough and the mobile phone does the rest. Why open an application and navigate through menus if we can simply ask out loud? "Mark all messages as read", "Order a car to my location", "Open the discounts app and tell me what promotions I can use today". In that ideal future, an agent should take care of everything without us touching the screen.

Recent reality, however, has gone another way. Despite the visible rise of AI, interaction with mobile phones remains anchored in familiar dynamics. The most advanced version of Siri (the one Apple promised with agentic capabilities within Apple Intelligence) has still not arrived, and the user experience has not changed substantially. In this context, ZTE has decided to take a step that no manufacturer had materialized until now: integrating a deep AI agent at the system level. The result is the Nubia M153.

The mobile that turns agentic AI into its core. Far from limiting itself to accessory functions, the Nubia M153 is committed to real AI integration. According to Global Times, it incorporates a preview version of Doubao Mobile Assistant, developed by ByteDance and ZTE. Although the assistant is still being polished, it already demonstrates a striking ability to interact with applications and execute tasks that until now required user intervention.

The demonstrations have gone viral. On X, one user shows how it is enough to ask the agent to hire someone to wait in line for him (a common service in China) for it to execute the whole process. In another test, a photo of a hotel is enough to reserve a room at the best available rate: the system identifies the establishment, opens the appropriate app and proceeds with the reservation. On Weibo the scene is similar: "Order me three lattes and a Mixue ice cream," says a young woman.
The assistant gets going, asking for details when it needs them (size, sugar) and taking on new tasks, such as finding the cheapest pizza service, buying movie tickets or converting photos into AI-generated images.

An experiment that has exceeded expectations. The Nubia M153 is not a mass-market phone. It is only sold in China and in very limited quantities. According to Sina, ZTE launched about 30,000 units aimed mainly at users with a technical profile interested in testing the new agentic capabilities, at a price of 3,499 yuan (about 425 euros at the current exchange rate). Despite this reduced production, the device sold out a few hours after going on sale on December 1.

Under the hood. IT Home details that the phone has a Qualcomm Snapdragon 8 with the Ultra label, 16 GB of RAM, 512 GB of storage and a 6.78-inch LTPO screen with a resolution of 1264 x 2800 pixels. Its camera system relies on three 50 MP sensors (main, wide angle and telephoto), and the design maintains a simple aesthetic, with a white back cover, black camera module and rounded edges.

Are we ready for the agentic era? The launch also revealed the first brakes. Shortly after the units reached users' hands, several WeChat accounts started showing warnings of suspicious activity. The same thing happened on Alipay and Pinduoduo. Everything indicates that the assistant's autonomous behavior triggered anti-automation protection mechanisms, designed to block usage patterns that do not fit normal human activity. It is, in practice, the first showdown between new-generation agents and the traditional platforms that dominate the Chinese digital ecosystem.

Images | ZTE

In Xataka | Almost all phones with optical zoom have the same problem. This Chinese brand believes it has solved it in a curious way

Parmesan cheese is extremely serious business in Italy. To the point of having its own agent in Hollywood

The most famous cheese in the world (with permission from Cabrales) has just hired representation in Hollywood. The Parmigiano Reggiano Consortium (which is what the Italians call what we simply call Parmesan) has signed United Talent Agency (UTA), one of the leading agencies in the film industry, to boost the presence of the Italian product in films, television series and streaming platforms on an international scale.

The agreement. The strategy seeks to position this cheese, which carries a Protected Designation of Origin, in global productions in a more or less natural way, taking advantage of the fact that it is known throughout the world. According to statements by Carmine Forbuso, marketing manager of the Italian organization, the cheese represents "simplicity, quality and depth" thanks to only three ingredients, all natural, and centuries of tradition in its artisanal production. Exports of the product reached 53.2% in the first eight months of 2025.

How is the product placement business doing? The global product placement market reached $33 billion in 2024 with annual growth of 12.3%, far exceeding the increase in traditional advertising investment. This marketing strategy has seen four consecutive years of double-digit expansion and has doubled in size compared to 2018, so no, we are not just talking about the jar of soluble cocoa in 'Family Doctor'. Specialized agencies such as UTA Entertainment Marketing, which will represent Parmesan, have doubled their revenue in two years. And it seems to work: the success of this tactic lies in its naturalness, since more than 52% of US consumers prefer these appearances over conventional advertisements.

Some precedents in Hollywood.
The modern history of food product placement has its founding moment in 1982, when the candy brand Reese's Pieces got all the attention in a crucial scene of Spielberg's 'E.T.'. Mars had refused to allow M&M's to be used, and it was quite a mistake: Hershey, maker of Reese's Pieces, tripled sales in two weeks. Today it is a popular resource: in 2024, for example, Coca-Cola appeared in 561 films and series.

When it goes wrong. However, forced placement often generates rejection, and that is something brands have to take into account. The oldest people in the room remember with a shudder the movie 'Mac and Me' (curiously, a rip-off of 'E.T.'), full of covert advertising for Pepsi and McDonald's, in whose restaurants even a musical number took place. When the brand interrupts the logical narrative of the film, the viewer perceives it as invasive advertising, and that is what happened in this classic of eighties alien schlock.

Header | Brands&People on Unsplash

In Xataka | Italy's forbidden dish: a cheese so extreme in its preparation that the European Union had to put limits on it

The ChatGPT Atlas agent made my purchase at Mercadona and now I have a pantry full of garlic

A week ago I tried ChatGPT Atlas, OpenAI's new browser, and although it has a lot of room for improvement, it looked like a threat to Google's dominance with Chrome. Today I put it to the test again, this time with a Plus subscription, and I wanted to check whether agent mode is capable of doing my shopping at Mercadona.

Setting up the situation. It was the first time I had used ChatGPT for something like this and I didn't want to simply hand it a shopping list, so I first asked it for ideas for healthy recipes that are also delicious. It offered me several options and, when I settled on one of them, I activated agent mode and asked it to buy the ingredients at Mercadona.

We have already discussed how AI browsers are vulnerable to prompt injection attacks, and OpenAI knows it. Before starting, a message appeared alerting me that using agent mode carries risks and that I could use it with or without a logged-in session. In my case I chose the logged-in session because I wanted to see it work more smoothly, but as a precaution I first deleted my payment details from the Mercadona website.

Making the purchase. Once the risks were accepted, agent mode was activated and the mouse started to move through the Mercadona website. The sidebar shows the model's entire thought and decision-making process while it buys the ingredients for a chickpea curry. In the video you can see the entire purchase process.

The agent made decisions whenever it found several items to choose from. For example, the recipe required one onion, but it decided it was more practical to buy a 1 kg pack. When choosing spinach, however, it decided that a bag of baby spinach was better than the large bag, which is much cheaper. When it finished choosing ingredients it asked me to check the basket, and I asked it to change the spinach. It did so without question.
The process stopped when it ran into an insurmountable obstacle: the basket only added up to 10.28 euros, and the minimum order on the Mercadona website is 50 euros. So I asked it to also include the ingredients for another of the recipes it had suggested at the beginning, a baked salmon. Since that did not reach the minimum order either, I told it I wanted to make it for four people and, please, fresh salmon rather than frozen. The agent adjusted the quantities and swapped the salmon for a fresh one, but it still did not reach 50 euros, so I asked for something more creative: to look up Mercadona's most recently viral products and add them to the basket.

The shopping gets done for you, but there is a problem. When it was finished, it was time to check the basket. I found it had added garlic and also purple garlic. The normal garlic was fine, but the purple ones? I reviewed the chain of thought: it had got confused looking for purple onion. Mercadona calls it "red onion", and the agent decided it was better to add purple garlic because the color matched, even though it is a different ingredient. As for the viral products, it chose an advent calendar with makeup, smoked raclette cheese, cookie nougat and pistachio cake. The total came to 66 euros; it is true that I had not expressly told it to stick to 50 euros, but it seems to me that it went a little overboard.

The agent took control of the browser and did exactly what I wanted: the shopping, for me. However, there is a problem: it is very slow. I did not help much either. Not having anticipated the minimum order, plus the additional requests I kept making, such as changing quantities or picking products on its own, made it even slower. In total it spent almost 15 minutes thinking, although if we count only the first part of the purchase, the chickpea curry, it took 2:14 minutes.
More than two minutes to add eight items to a cart. The whole time I had the feeling that I would already have had the order finished and paid for. Regarding reliability, I have to say it made fewer mistakes than I expected, but you still need to check what it has added to the basket at the end, because it can sneak in some garlic instead of onions, and I already have enough garlic in the pantry.

Much more practical in other scenarios. One of the use cases OpenAI gave in the presentation of its new browser was precisely doing the shopping. After trying it, it is clear to me that ChatGPT Atlas's agent mode has a lot of potential, but not for shopping, so I tried another scenario where it can be much more useful: organizing a trip. I asked it to find places for a getaway over the December long weekend, less than two hours by car from Valencia, within a specific budget, and to search on Booking and Airbnb. In six minutes it gave me options for two different destinations, organized in a table with price per night and highlights. Once I had decided, I only had to give it the personal information to complete the reservation.

For organizing a trip it is practical. Doing the shopping is simply adding things to a cart, a much more mechanical process we can do manually in very little time. If we also run into obstacles like the minimum order, or we are not entirely clear about what we want, we end up losing more time than we gain. Where the agent does offer more … Read more

The captcha had become an excellent tool to fight bots. Until ChatGPT Agent arrived

In 2003 a young Guatemalan named Luis von Ahn published a unique study along with two colleagues from Carnegie Mellon University and an IBM researcher. That paper described an automated test that was easy for humans to solve but practically insurmountable for artificial intelligence systems. Those researchers called that test the captcha.

The concept was simple and leaned on the already famous Moravec paradox: there are things humans do effortlessly, such as solving the visual puzzles that captchas propose, that machines fail to solve. The idea turned out to be one in a million. Von Ahn went on to create an improved version, which he called reCAPTCHA, that not only verified you were human: it did so while helping to train and perfect OCR systems. That complementary idea was another singular "Eureka!" moment for von Ahn, and in fact it made him a millionaire in 2009, the year Google decided to buy his service. He would later dedicate himself to another equally striking (or perhaps more striking) project: Duolingo.

A dizzying (and juicy) evolution. Meanwhile, the captcha continued to grow and evolve, making things harder and harder for machines that were gradually showing that perhaps those tests were no longer so valid. From those basic captchas we moved on to reCAPTCHAs of all kinds, in which visual puzzles not only challenged machines' capacity for abstraction but also helped train, no longer OCR systems, but computer vision systems to better recognize cars, buses, zebra crossings or, of course, fire hydrants. But computer vision and AI systems improved too, and the struggle between these tests (captcha stands for "Completely Automated Public Turing test to tell Computers and Humans Apart") and the machines became more and more interesting. It was a singular cat-and-mouse game with spambots, and whenever some AI system managed to beat a captcha or one of its variants, the puzzles became harder still.
The story has just repeated itself. It happened this Friday, when a user of the r/OpenAI community on Reddit published screenshots of ChatGPT Agent passing, without apparent problems, one of the most popular and widely used captchas on the internet today: Cloudflare's Turnstile system, which presents a small box with the text "I'm not a robot" to click on. It seems very simple, but it is not so simple for machines. As Cloudflare explains, this captcha variant analyzes various signals such as mouse movement, the time we take to click, the "digital fingerprint" of our browser, the "reputation" of our IP, and certain JavaScript execution patterns. With these it determines whether the user is a human being or is suspected of being a bot. And if there is suspicion, the system follows that first captcha with another in which we do have to solve some kind of visual puzzle.

The AI does not know whether it is human; it just tries to operate as one. The funny thing here is that OpenAI's agent solved the problem in the obvious way: looking at what was on the screen and acting accordingly, something that had not been easy until now. The agent even narrated what it was doing, and while taking that step it showed the following text: "The link is inserted, so I will click on the 'verify that you are human' checkbox to complete the verification on Cloudflare. This step is necessary to demonstrate that I am not a bot and be able to continue with the action." In other words: the machine was self-certifying as a human being. It is unusual, but perhaps not so strange considering that 1) the AI does not really know what it is saying, and 2) it has been trained to speak (and act, at least in a limited way) like a human being. Operator, OpenAI's previous agent, had a really hard time with these systems. Does this mean captchas are under threat of death? Probably not.
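Cloudflare does not publish Turnstile's actual model, so the weights and thresholds below are invented; the sketch only illustrates how signals like the ones just listed (mouse movement, time to click, browser fingerprint, IP reputation) could combine into a single suspicion score.

```python
# Toy bot-detection score, assuming invented weights. Illustrative only:
# the real Turnstile checks and thresholds are proprietary.
def bot_suspicion_score(mouse_path_points: int, ms_to_click: int,
                        known_fingerprint: bool, ip_reputation: float) -> float:
    """Return 0.0 (human-like) .. 1.0 (bot-like)."""
    score = 0.0
    if mouse_path_points < 5:      # cursor "teleported" to the checkbox
        score += 0.35
    if ms_to_click < 150:          # clicked faster than human reaction time
        score += 0.35
    if not known_fingerprint:      # browser looks headless or unusual
        score += 0.15
    score += 0.15 * (1.0 - ip_reputation)  # bad IP history adds suspicion
    return min(score, 1.0)

# A slow, wandering cursor from a reputable IP passes silently; an instant
# click from a headless browser gets escalated to a visual puzzle.
assert bot_suspicion_score(40, 900, True, 0.9) < 0.2
assert bot_suspicion_score(1, 50, False, 0.2) > 0.8
```

The point of the design is that no single signal decides anything; it is the combination that pushes a session over the threshold into the visual-puzzle fallback, which is why an agent that moves a real cursor through a real browser can slip under it.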
This is just another battle in the war between bots and captchas. We saw another AI victory in October 2024, for example, and it did not mean the downfall of this type of user verification test. As Ars Technica points out, captcha systems have not stopped evolving. From those blurry, deformed texts we moved to reCAPTCHAs that had us solving visual puzzles of all kinds, and lately to tests that no longer ask us to identify traffic lights but to place an image in a specific orientation (an increasingly popular system called Arkose MatchKey), or to identify some element of an image that does not belong in it. In fact, the most recent captchas are designed not so much to stop bots from crossing the barrier as to slow them down so much that brute-force attacks with bots do not pay off.

Captchas not as a barrier, but as a brake on bots. An article by the people behind Arkose Labs, creators of MatchKey, made it clear that "there is no completely impenetrable captcha", and that what they intended with their proposal was "to introduce an economic deterrent or proof-of-cost for the malicious behavior of bots". In other words: to make developing a bot that beats those captchas so expensive that it is not worth it. So we should not worry much if AI agents can pass this test, because captchas will surely keep appearing that pose an almost impassable barrier for these systems. It is precisely the same concept behind the ARC-AGI-2 test, which measures visual understanding and abstract reasoning in AI systems and is so complicated that the best AI models, which are also very expensive, solve at most 4% of cases (o3-preview). Will there come a time when those AI agents … Read more

We believed that ChatGPT was just a very capable chatbot. OpenAI has just turned it into something very different: a real agent

We have been talking about artificial intelligence agents for a long time, but OpenAI has just turned that conversation into something much more tangible. The company has presented ChatGPT Agent, a feature that turns its popular assistant into something more autonomous: it is now able to execute complex tasks using a virtual computer, with tools that allow it to browse, program or even make decisions.

From Operator to Agent. At the beginning of the year OpenAI presented Operator, a tool that allowed ChatGPT to interact with web pages. Then Deep Research arrived, focused on writing long reports from multiple sources. The underlying idea was clear: go beyond conversation and tackle real tasks. What has been presented today is something like a tool that unifies all these previous advances.

During the demonstration, those responsible for the project posed an everyday situation: organizing a trip to attend a wedding. The agent was able to understand the context, find hotels, propose gifts, take into account the weather and the dress code, and even remember that a suit had to be bought. It did so by analyzing the message, accessing the web and acting step by step, as a person would. The difference is that everything happened inside ChatGPT, without the need to switch tabs or give instructions one by one.

A virtual computer for the AI. The key is that the agent is not limited to responding with text: it operates inside a kind of virtual computer that OpenAI has given it access to. It can use a text browser to read pages quickly, a visual browser to interact with buttons and forms, and even a terminal to run commands, generate code and manipulate files. It can also work with spreadsheets and presentations, and access services such as Google Drive, Calendar or GitHub if the user authorizes it.

What is under the hood?
The model that drives ChatGPT Agent (developed specifically for this feature, although it has no official name) was trained on complex tasks that required combining multiple tools. OpenAI used reinforcement learning, the same approach it already uses in its reasoning models, to teach it when to use the browser, the terminal or an API. The idea was to develop a system capable of accurately deciding how to act depending on each objective. In development.

Images | OpenAI

In Xataka | Meta is in such a hurry to lead in AI that it has done something unusual: it is building a data center in tents
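As an aside, the tool choice described under the hood can be sketched heuristically. OpenAI has not published the learned policy, so the keyword rules below are a hand-written, invented stand-in; the real system learns this decision through reinforcement learning rather than rules.

```python
# Toy tool router: pick among the text browser, visual browser and
# terminal described above. Illustrative stand-in, not OpenAI's policy.
def choose_tool(step: str) -> str:
    step = step.lower()
    if any(w in step for w in ("click", "form", "button")):
        return "visual_browser"   # needs pixel-level interaction
    if any(w in step for w in ("run", "script", "file")):
        return "terminal"         # needs command execution
    return "text_browser"         # fast reading is the cheap default

assert choose_tool("Fill the booking form") == "visual_browser"
assert choose_tool("Run the unit tests") == "terminal"
assert choose_tool("Read hotel reviews") == "text_browser"
```

Even this toy version shows why the routing matters: reading with the text browser is far cheaper than driving a visual browser, so a policy that defaults to the lightweight tool and escalates only when needed saves most of the cost of a multi-step task.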

OpenAI has just launched its new programming agent. The interesting thing is what it can do when nobody is looking

Artificial intelligence (AI) currently claims a leading place in the world of programming. More and more software developers turn to AI systems to write code, fix bugs or automate repetitive tasks. OpenAI is betting on this area again with Codex, its new agent. It is a preview-phase tool that acts as a virtual collaborator. Its engine is codex-1, a variant of o3 tuned to better understand the needs of modern development. Among its promises: generating cleaner code and following instructions with greater precision, among other advantages.

How does Codex work? Codex is not a simple assistant that suggests code snippets. It is a software agent that operates in the background from the cloud. Once connected to your GitHub account, it can access your repository, read files, propose changes and execute tasks such as writing new functions, fixing bugs or running tests. It does all this autonomously and safely, within an isolated environment (a kind of virtual cloud computer) that simulates your development environment. This not only protects your system, but also allows Codex to execute tasks without affecting your local workflow. While it works, you can keep using your computer normally.

The interesting thing is that it does not execute a single action at a time: it can take on several tasks at once. For example, you can ask it to review one part of the code and look for errors in another section of the project. Each task is managed separately, and Codex reports its progress in real time, allowing the user to review it.

The tool is designed to adapt to the way developers work. In fact, it can be guided by special files called AGENTS.md, a kind of instruction manual that lets you indicate what style to follow, how to run the tests or what practices should be respected within the project. Although Codex has just launched in preview, it is not an improvised experiment.
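As an illustration, an AGENTS.md of the kind just described might look like this. The file below is invented for the example, not taken from any real repository; the section names and rules are only one plausible way to organize such guidance.

```markdown
# AGENTS.md — hypothetical example for a Python project

## Style
- Follow PEP 8; format everything with `black` before committing.

## Tests
- Run the suite with `pytest -q` from the repository root.
- Every bug fix must include a regression test.

## Practices
- Do not touch files under `migrations/`.
- Prefer small, focused changes that are easy to review.
```

The agent reads this file before acting, so the project's conventions travel with the repository instead of having to be repeated in every request.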
OpenAI's own engineers have been using it for months as part of their workflow. What for? To automate repetitive tasks: renaming variables, writing tests or drafting documentation. The system is also being tested at companies such as Cisco, where they hope to accelerate the development of new ideas, and at Temporal, which uses it to debug errors, write automated tests and reorganize large codebases. OpenAI's recommendation after this first round of testing is clear: assign well-defined tasks, launch several in parallel and experiment with different types of requests. The key is to find the exact point where the agent can deploy its full potential.

Codex works remotely, but it does so within a controlled environment. As we said above, each task it executes takes place in an isolated virtual machine, with no direct internet connection or access to external services. It can only interact with the code the user provides and with the tools pre-installed via a setup script. In addition, OpenAI says it has trained Codex to identify and reject instructions aimed at creating harmful software, such as hacking tools or malware.

In any case, we must not lose sight of the fact that this is still a product in the research phase, with many aspects to improve. And it is still generative AI: it can make mistakes or misunderstand certain instructions if the context is not well defined. OpenAI plans to introduce gradual improvements, such as the ability to interact with the agent during tasks, receive more detailed updates or integrate it with tools such as incident managers.

If you want to start trying Codex today, keep in mind that it is not yet available to all users. For now, OpenAI has begun rolling out the tool to subscribers of the Pro (200 dollars per month), Enterprise and Team plans. According to the company, users on the Plus and Edu plans will have access "soon".
Images | OpenAI

In Xataka | Saudi Arabia has signed a 7 billion dollar check for Nvidia. Jensen Huang is now 12 billion dollars richer

Mobile AI promises a lot, but I only see repeated tricks. The real ace up the sleeve is called the "agent", and it is on its way

How could it be otherwise, AI has also reached our mobile phones, mainly at the hands of two protagonists: Apple Intelligence on one side, Google Gemini on the other. In the space of a year, we have gone from almost no manufacturer having AI to many of the industry's protagonists now fighting in the classic race. Samsung was a pioneer with Galaxy AI's first steps, in which it wrapped generative AI in useful features: transcription of voice recordings, simultaneous translation and wallpaper generation, to name a few.

More than a year later, AI suites for smartphones have multiplied. And yet, there is still a long way to go. Although these tools are a first step, the true pending revolution is agentic AI, capable of performing complex tasks autonomously. Its implementation, however, faces significant obstacles that limit its current potential.

The next step is agents. After witnessing MWC 2025, it is clear that AI was not the central theme, although it was the most mentioned one. Obviously, it had its moments of prominence, with bombastic "AI" announcements on several fronts: Motorola, Honor, Realme and Samsung have all shown off suites for Android. MagicAI, Next AI, HyperAI… they extend the path of Galaxy AI, which, by the way, has also outdone itself with the arrival of the Galaxy S25. How is it better than the first version? We are basically talking about two more skills, which revolve around personalization. A short but sure step that, in essence, is not far from what I was able to test last year.

Of those mentioned, the one that surprised me the most was MagicAI, Honor's suite (integrated into MagicOS), and its ambitious Honor Alpha Plan. For a simple reason: it already introduces the concept of AI agents. It is not just an AI that responds, but one that can understand a complex objective, plan steps and proactively execute actions across different applications and services to achieve it (for example, planning and booking a complete trip, not just looking for a flight).
These agents, or agentic AI, contrast with current functions, which are more reactive or limited to a single task or app. Google has taken some timid steps with Gemini's extensions, but there is a problem to address first: interoperability between services. For an agent to book a table or order food, it needs to interact fluently with any restaurant or delivery app, which in turn requires standardized APIs or a deep level of integration that, for now, does not exist in any generalized way. These demonstrations also have an important limitation: they take control of the screen and do not let the user keep using the phone while the agent acts. It leaves the feeling that the machine is doing things for us, but the machine itself is blocking us. In the demonstration, Honor's AI could interact with both the phone and the real world: told that its screen brightness was too high, the phone responded with instructions while lowering it. Likewise, it was able to book a table at a restaurant, running the corresponding app automatically. That is probably the right direction for mobile AI, although it is too early to say. Along the way, several actors will have to join hands: ordering food online, after all, involves more than just Google or Honor.

For now, it is not only these agents that are useful, but also the rest of the tools that manufacturers keep adding to their customization layers. Galaxy AI is not the only one. It is easy to single out the aforementioned Samsung Galaxy AI: it was a pioneer in incorporating tools based on generative AI, albeit together with Google. "Circle to Search" led the way and is now present on a large number of Android phones. That is why the Pixel and Galaxy lines, along with the rest of the manufacturers, now share some common functions — functions that are still far from the agents described above. The difference lies in the customization layer.
Beyond what Google lends its partners, the interesting thing is to see the commitment of manufacturers such as Honor, Xiaomi and Realme, who are already trying to follow Galaxy AI's lead. This is what I found. I start with the brand least known in Europe, Tecno, which could perfectly well rival the strongest in the sector. Its Tecno AI does not stray from what is seen in other tool suites: restoration of old photographs, transformation of portraits into animations, a call assistant. In short, nothing new under the sun. Xiaomi, which has announced the arrival of its HyperAI on the new Xiaomi 15, proposes a similar combination: AI Writing generates messages for us, AI Speech turns voice into text, and there is a real-time translator, image generation, object erasing (in photos, for example) and even a function aimed at cinematic video recording. It is more extensive than other manufacturers' offerings because Xiaomi had already been running this AI suite in China for a long time. As we can see, photography is one of the areas most touched by AI, along with productivity tools. Others try to differentiate themselves, among them the legendary Motorola, now under Lenovo's umbrella. Motorola's functions resemble what was seen in Google's Project Astra (now integrated into Gemini Live). Moto AI is the name of its AI system, and it is not limited to phones: Motorola adds its own agent to the most advanced headphones in its catalog. I witnessed a demonstration that showed promise, even if it suffered some occasional lag caused by the internet connection. The system includes a voice assistant in the purest Gemini style: capable of organizing the agenda, saving notes in a personal diary and, of course, improving photographic processing to avoid blurry shots. For now, Motorola does not have many models compatible with Moto AI: only the Edge 50 Ultra and the Razr 50 and Razr 50 Ultra.
Realme, with its Next AI, takes a similar approach to bringing AI to the phone… Read more

Wisconsin man accused of setting fire to lawmaker’s office over TikTok ban

MADISON, Wisconsin, USA — A Wisconsin man who allegedly told police he tried to set fire to a lawmaker's office because he was upset with the federal ban on the social media platform TikTok was charged Wednesday with multiple counts, including arson. Fond du Lac County District Attorney Eric Toney filed a complaint against 19-year-old Caiden Stachowicz, charging him with felony arson, making terrorist threats, attempted robbery and criminal damage to property. If convicted of all charges, he would face a sentence of more than 50 years in prison. Stachowicz, of Menasha, was scheduled to make his first court appearance Wednesday morning. Online court records indicated Judge Tricia Walker set his cash bail at $500,000 and ordered him to have no contact with Republican U.S. Rep. Glenn Grothman or his staff. He was also prohibited from possessing any dangerous weapons or materials that could start a fire. Records showed Stachowicz appeared via video call from jail. His lawyer could not be reached for comment. According to the complaint, a police officer responded to a fire outside Grothman's office in Fond du Lac around 1 a.m. Sunday and saw Stachowicz standing near the site. The officer said that while he was working to put out the flames with his fire extinguisher, Stachowicz told him he started the fire because he doesn't like Grothman. The officer handcuffed Stachowicz and took him to the police department. Firefighters and police quickly extinguished the fire, limiting the damage. During an interview at the police department, Stachowicz told the officer that he bought gasoline and matches to start a fire in Grothman's office, according to the complaint. He said he tried to get into the office so he could start the fire inside, but he couldn't break the window. He then poured the gasoline into an electrical box at the back of the building and around the front of the building, lit a match and watched it burn, the complaint adds.
He noted that he wanted to burn the building because the US government was shutting down TikTok and Grothman voted “in favor” of banning the social network, according to the complaint. Grothman voted in favor of a bill in April last year that forced TikTok’s China-based parent company, ByteDance, to sell its US operation by Sunday. Stachowicz said he believed the closure violated his constitutional rights. He added that he had participated in peaceful protests in the past, but no longer believes peace is an option, the complaint states. “Caiden said it was a government building and he wanted to cause a disruption and make a point by starting the fire in the building,” according to the complaint. “Caiden commented that he wished the entire building had burned down.” When asked if he expected people to be inside the building, he said no and that he didn’t want to hurt anyone, and he didn’t want to hurt Grothman either. TikTok went down in the US on Saturday afternoon, but the platform was back up and running hours later after then-President-elect Donald Trump said he would try to give ByteDance more time to find a buyer. Trump signed an executive order Monday after taking office instructing the U.S. attorney general not to implement the ban for 75 days. When asked to comment on the charges, Grothman spokeswoman Noelle Young responded by saying Grothman would call The Associated Press directly. However, the lawmaker had not contacted the AP as of Wednesday afternoon.

Details revealed about suspect who killed Border Patrol agent in Vermont

Authorities confirmed that the suspect who allegedly killed a US Border Patrol agent during a traffic stop in Vermont, near the northern border, is a German citizen who was in the country on a valid visa. "Our partners at the Department of Homeland Security confirmed that the deceased subject is a German citizen who is in the United States with a valid visa," an FBI spokesperson in Albany told Fox News Digital. Authorities said on Monday, January 20 that U.S. Border Patrol Agent David "Chris" Maland, 44, was hit by gunfire during a traffic stop on Interstate 91 between Newport and Orleans, Vermont. In a statement, FBI Albany said Maland was a veteran of the United States Air Force and added: "We are heartbroken for our partners and share their pain as they mourn the loss of their colleague." Authorities said there were two suspects in the vehicle. They confirmed that one of the suspects was dead and the other was injured and is currently being treated at an area hospital. The FBI field office said it continues to work closely with federal, state and local officials to further investigate the incident. "FBI Albany has numerous resources in the area, including our Evidence Response Team (ERT), Victim Services, Digital Forensics, and dozens of Special Agents," they said. On Monday night, authorities were seen using a robotic device to search a backpack near what appeared to be a body on the ground at the scene. The Border Patrol union shared its condolences, saying on X: "Our hearts and prayers go out to the family, friends and co-workers of our fallen green brother in Vermont."
