Xiaomi is testing the mother of AIs for its cars, mobile phones and home. And there is no trace of Google or OpenAI

Xiaomi long ago stopped being simply a mobile brand and became one of the giants of the Chinese technology ecosystem. The company is no longer chasing volume; it is chasing aspiration, and to get there it wants to deliver a remarkable user experience. Deep integration of artificial intelligence is essential to that goal, and that is where MiClaw comes in. MiWhat? Xiaomi has published on its website the details of MiClaw, its next step in exploring AI agents. It begins as a small-scale closed test, but it lays the foundations of what we will see in the near future on the company's devices.

What it is. With MiClaw, Xiaomi is testing the execution capabilities of its large AI models (MiMo) across the mobile-car-home ecosystem, both at the conversational level and in terms of taking action. It is a deeply integrated model, one with full access to every event on the device, able to reason for itself about what action needs to be taken.

What it does. The agentic AI Xiaomi is preparing follows a four-step model: perception, association, decision and action. In the announcement itself, Xiaomi gives some examples of how its agent can make our lives easier: a refrigerator that automatically checks which consumables are missing at home, connects to our calendar and creates a reminder to do the shopping; or, when you buy a train ticket, an agent that reads the confirmation SMS, consults your calendar, and automatically prepares and schedules the trip.

Why it matters. That Xiaomi is redoubling its efforts in AI is no coincidence. The company wants to be a benchmark in the ecosystem and conquer regions like Europe, and leading in artificial intelligence will be key for each of its product pillars: cars, home devices and mobile phones. Xiaomi wants to move beyond the current interpret-and-execute approach to an agent capable of carrying out up to 20 consecutive, independently executed actions.
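Xiaomi has not published MiClaw's internals, but the four-step loop it describes maps onto a classic agent pattern. A minimal, purely illustrative sketch in Python (all class names, event shapes and action names here are hypothetical, not Xiaomi's actual design):

```python
from dataclasses import dataclass

@dataclass
class Event:
    """A device event the agent can perceive (hypothetical shape)."""
    source: str   # e.g. "sms", "fridge"
    payload: dict

class Agent:
    """Illustrative perception -> association -> decision -> action loop."""

    def perceive(self, event: Event) -> dict:
        # Extract structured facts from the raw event.
        return {"source": event.source, **event.payload}

    def associate(self, facts: dict) -> dict:
        # Link facts with user context (calendar, habits, inventory).
        if facts["source"] == "sms" and facts.get("type") == "train_ticket":
            facts["needs_trip_plan"] = True
        return facts

    def decide(self, facts: dict) -> list[str]:
        # Choose which actions to take; Xiaomi says a real agent
        # could chain up to 20 of these.
        actions = []
        if facts.get("needs_trip_plan"):
            actions += ["create_calendar_event", "schedule_alarm"]
        return actions

    def act(self, actions: list[str]) -> list[str]:
        # Execute each action via device APIs (stubbed out here).
        return [f"executed:{name}" for name in actions]

    def run(self, event: Event) -> list[str]:
        return self.act(self.decide(self.associate(self.perceive(event))))

agent = Agent()
ticket = Event("sms", {"type": "train_ticket", "dest": "Madrid"})
print(agent.run(ticket))  # ['executed:create_calendar_event', 'executed:schedule_alarm']
```

The train-ticket example from the announcement follows exactly this path: the SMS is perceived, associated with the calendar, and turned into scheduled actions without the user asking for anything.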
At the moment, MiClaw works as a closed beta on devices like the Xiaomi 17 Ultra, but Xiaomi's idea is to develop an agent capable of working on any of its devices. Image | Xataka In Xataka | Is the newest the best for you? We compare the Xiaomi 17 Ultra against the Xiaomi 15 Ultra to see which is a better buy in 2026

OpenAI is now the bad guy of AI. GPT-5.4 will have to be very good to change that

The soap opera with the Department of Defense has reshaped public perception of two of the leading AI companies in recent days: suddenly Anthropic is the good guy of the movie and OpenAI is the villain. And, whether for precisely that reason or not, Sam Altman's team has decided that now was the time to launch a new and promising AI model: GPT-5.4.

Hello, GPT-5.4. In its official announcement, OpenAI explains that the new model is available for now in two variants: GPT-5.4 Thinking and, for those who want "maximum performance in complex tasks", GPT-5.4 Pro. This is a foundational model that is better than ever at reasoning and programming, and above all at something very fashionable: "agentic flows". In other words: doing things for us.

The "Use My Computer" mode, front and center. The name is a loose translation, but it is more or less what OpenAI highlights as probably the great novelty of this model. As the announcement says, this is its first model "with native computer use capabilities". It is capable of taking control of our machine and doing things for us autonomously, completing complex cycles of action and solving the problems that come up along the way. Not only that: according to its creators, GPT-5.4 "is our most token-efficient reasoning model, using significantly fewer tokens to solve problems than GPT-5.2". In other words: AI doing things for us will be cheaper, and it will do them even better.

It uses the computer better than we do. The benchmarks certainly seem to point to fantastic performance in these tasks. In the OSWorld-Verified test, which measures a model's ability to navigate a desktop environment using screenshots and virtual mouse and keyboard actions, GPT-5.4 achieves a 75% success rate. That is not only better than GPT-5.2's 47.3%: it even exceeds human performance, which stands at 72.4% according to the creators of the benchmark.
Other tests of this type, which evaluate an AI model's ability to navigate interfaces, also make it clear that GPT-5.4 is well ahead of its predecessors.

The ARC-AGI result is scary. Machines were supposed to struggle with the abstract reasoning problems that humans are naturally good at, but apparently not anymore. In recent times we have seen the ARC-AGI 2 test, which once looked like a serious challenge for AI models, become increasingly tractable for them. GPT-5.4 takes another bite out of that reality: the Pro version already solves 83.3% of the tasks (73.3% for the standard model), when GPT-5.2's rate was 52.9%. It is a simply brutal jump, and although the gains are less striking elsewhere (it programs somewhat better according to SWE-Bench Pro, but not by much), it is clear that we are looking at an extraordinary model.

Perfect for OpenClaw? That capability seems tailor-made for OpenClaw, the AI agent that has become a phenomenon in recent weeks. OpenAI ended up hiring its creator and is, in a way, the "owner" of the project, and this performance in agentic tasks should be very useful for everything OpenClaw does, which is basically exactly that: managing your machine for you. That is where GPT-5.4 can really come into its own.

And you can trust it more. According to OpenAI, GPT-5.4 is now better at answering questions that require gathering information from multiple sources, "identifying the most relevant ones, particularly for 'needle in a haystack' type questions, and synthesizing them into a clear and well-reasoned answer". What's more: they describe it as their most fact-grounded model and say it is 33% less likely to state something false than GPT-5.2.

But be careful: it is very, very expensive. These capabilities, however, will not come cheap.
With this launch OpenAI has updated its prices, and it has made it clear that if you want the best, you will have to pay for it. The "standard" GPT-5.4 model costs $2.50 per million input tokens and $15 per million output tokens, while the Pro costs a whopping $30 and $180 respectively. Claude Opus 4.6, until now considered the best AI model, costs $10 per million input tokens and $25 per million output tokens: it was already expensive, but GPT-5.4 Pro makes it look almost like a "bargain".

Trying to stop the bleeding. The model arrives at a delicate moment. According to various sources, ChatGPT has lost 1.5 million users since OpenAI announced its agreement with the Department of Defense. That decision provoked plenty of criticism, a "cancel ChatGPT" movement on social networks, and internal tensions. There was already talk of a potential GPT-5.4 before the scandal, but the launch now clearly takes on a double meaning. It doesn't just have to be better than everyone else: it has to redeem OpenAI.

And above all, OpenAI needs a win. Public perception seems clear: the company has been suffering lately, whether from internal dramas, talent drains, or temporarily falling behind in the performance of its models. GPT-5.4 is not a simple evolution of its flagship model, because what OpenAI needs is for this model to succeed and convince people to "fall in love again" (figuratively, you know what we mean) with ChatGPT. We'll see if it succeeds.

In Xataka | Sam Altman says he's terrified of a world where AI companies believe themselves to be more powerful than the government. It's exactly what he's building
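A footnote on those prices: a quick back-of-the-envelope sketch in Python, using only the per-million-token figures quoted above (the example workload sizes are made up for illustration):

```python
# Price in dollars per million tokens (input, output), as quoted.
PRICES = {
    "GPT-5.4": (2.50, 15.00),
    "GPT-5.4 Pro": (30.00, 180.00),
    "Claude Opus 4.6": (10.00, 25.00),
}

def job_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single job with the given token counts."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A hypothetical agentic job: 2M tokens in, 500k tokens out.
for model in PRICES:
    print(f"{model}: ${job_cost(model, 2_000_000, 500_000):.2f}")
# GPT-5.4: $12.50
# GPT-5.4 Pro: $150.00
# Claude Opus 4.6: $32.50
```

For that workload, GPT-5.4 Pro comes out roughly 4.6 times more expensive than Opus 4.6, which is exactly why the "bargain" framing above is only half a joke. Whether the Pro model's token efficiency claims narrow that gap in practice remains to be tested.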

NVIDIA was going to make the mother of all investments in OpenAI, but the era of favors between friends is over

NVIDIA has emerged as the backbone of artificial intelligence. Its chips power the most powerful data centers in the world, and the company keeps landing billion-dollar deals to keep the wheel turning. At the same time, it has become one of the largest strategic investors in the artificial intelligence ecosystem. OpenAI seemed to be its best friend, but that's over. And Jensen Huang, CEO of NVIDIA, has made it clear: the next investments will probably be the last. The same goes for OpenAI's great rival.

From $100 billion. That was the magic figure we talked about a few months ago. Recreating the "vendor financing" schemes of the dotcom bubble, NVIDIA was going to finance OpenAI with $100 billion; in exchange, OpenAI would buy NVIDIA chips for the same value. It was a circular operation, because NVIDIA would become the financier of its own premium client. With such an investment, OpenAI was expected to build data centers that would need between four and five million NVIDIA GPUs: Huang commented at the time that this was double the total number of GPUs they had shipped the previous year. In short: an absolute beast. And those $100 billion were a mega-operation, yes, but just one more of the many financing rounds of the company led by Sam Altman.

Down to $30 billion. But in early February of this year, something unexpected happened. In what looked like a historic turnaround, Jensen Huang, cornered by the media after a casual dinner at a Taiwanese restaurant, commented that there had never been a 100% commitment to make that mammoth investment. The CEO of NVIDIA noted that they would surely still make "the largest investment" in their history, and although he gave no figure, it was clear it would be nowhere near $100 billion. How much? Much less: $30 billion. Good luck, OpenAI. A love affair has ended, one that began when Jensen Huang hand-delivered a DGX-1 server to Elon Musk back in 2016.
Because it is not just that Huang has said the figure will be around $30 billion; he has also mentioned that "it could be the last time" they inject money into OpenAI. And the reason is very clear: "the reason is that they are going public." From then on, OpenAI will have to change its model completely and will be at the mercy of the market.

Big bets. With this operation, NVIDIA shows that it is changing course, preferring not to marry anyone or commit in a truly serious way to a single company. Of course, OpenAI is not the only big operation NVIDIA is getting into. Another $10 billion is in store for Anthropic, OpenAI's great rival both professionally and personally (Altman and Amodei can't stand each other). And Huang has mentioned that this one, again, will probably also be the last: Anthropic is likewise expected to go public.

Fewer giants, broader base. OpenAI will soon have $110 billion: on top of NVIDIA's $30 billion, Amazon will inject $50 billion and SoftBank has committed another $30 billion. Huang has hinted that these two large operations could mark the beginning of a change of course: instead of a handful of deals with giants, more investment in smaller companies. NVIDIA has been investing more modest sums in other AI companies for years: model and software companies, infrastructure, robotics and even autonomous driving. It has been turning its GPUs and platforms into the standard on which the entire artificial intelligence industry is built, and perhaps this break with giants like OpenAI or Anthropic marks a new phase focused on supporting a broader ecosystem of partners. That way, NVIDIA can keep shaping its objective: a range of companies, large and small, that scale on its platform.
Image | Steve Jurvetson, NVIDIA In Xataka | AI engineers are closer to football stars than ever: NVIDIA has paid $900 million for one

OpenAI wants to eat Microsoft's lunch with a GitHub of its own

The relationship between OpenAI and Microsoft has seen better days. When their technological alliance began, it was one of the first signs that this whole AI thing was going to last. And here we are, in a landscape that is more delicate and more powerful in equal measure. Now, however, the alliance is going through one of its tensest moments, and the culprit has a first and last name: OpenAI is reportedly developing its own code hosting platform, a rival to GitHub, which belongs to Microsoft. Here are all the details.

Why it matters. GitHub is one of the crown jewels of Microsoft's technology ecosystem. If OpenAI goes ahead with this project, it would be attacking one of the most strategic businesses of its main partner and investor, the same company that has injected billions of dollars into its growth since 2019. And over the last two years, their relationship has gone from symbiotic to, on certain fronts, directly competitive.

What exactly happened. According to The Information, several OpenAI engineers had grown fed up with GitHub's repeated service outages in recent months, so they decided to explore building their own alternative. As the outlet reports, the project is still in an early phase, so it could take months to see the light of day (if it ever does), but OpenAI is reportedly already considering selling it as a product to its clients. For now, neither company has confirmed or denied it.

Renegotiations. The idea of building your own GitHub arrives in the middle of a renegotiation of the terms of that same alliance forged a few years ago, a process that, according to the Wall Street Journal, has become extremely tense.
As the paper reports, OpenAI wants to reduce its dependence on Microsoft for computing and distribution, and is also trying to close the acquisition of Windsurf, the AI startup that Google also moved in on to keep a good part of its talent. The problem: under the current agreement, Microsoft has access to all of OpenAI's intellectual property, and OpenAI doesn't want that to cover the Windsurf purchase. Microsoft, for its part, offers its own AI programming tool, GitHub Copilot, which competes directly with what OpenAI is trying to build.

Taking the fight public. According to the WSJ, several OpenAI executives have gone so far as to internally debate what they describe as a "nuclear option": publicly accusing Microsoft of anti-competitive behavior and seeking a federal regulatory review of the contract. That is, taking the fight to the legal and media arena. That this possibility is even on the table says a lot about how far the conflict has gone. The two companies have also been fighting for some time over the stake Microsoft would hold in the new for-profit entity OpenAI wants to become. OpenAI has until the end of the year to complete that transformation or lose $20 billion in committed funding.

How we got here. Microsoft first invested in OpenAI in 2019, with $1 billion. In exchange, it gained the exclusive right to sell OpenAI's tools through its Azure cloud and preferential access to its technology. For years it was a sweetheart deal for both: Microsoft positioned itself as a leader in enterprise AI and OpenAI got the infrastructure and money to grow. But the market has changed a lot, and what was a relationship of mutual dependence has become a race in which the two compete on almost all the same fronts: consumer chatbots, AI productivity tools, enterprise solutions, and so on. For now, both companies maintain the official line of optimism about their ability to keep building together.
But the cracks are increasingly difficult to paper over. In Xataka | Aragón is becoming a Spanish data center giant thanks to Amazon. There is still a big unknown

OpenAI had to choose between being the US military's star app and its users. And its users have made the choice for it

Last Saturday, uninstalls of the ChatGPT mobile app in the United States jumped 295%. Many users were appalled that OpenAI struck a theoretically unethical agreement with the US Department of Defense to replace Anthropic, and they have punished it with a "Cancel ChatGPT" movement on social networks that has also shown up in those uninstall numbers.

What happened. The consulting firm Sensor Tower, which monitors the state of the mobile app stores, has indicated that ChatGPT's uninstall rate rose 295% on Saturday, February 28 compared to the previous day. Normally the day-over-day uninstall rate hovers around 9%, but that day it was clear that many users decided to get rid of the app at once. The reason is obvious.

The Pentagon vs. Anthropic. The Pentagon had been working with Claude, Anthropic's AI, for months, and it was already being used on classified documents. Anthropic had made it a condition that its AI not be used for mass espionage or the development of autonomous weapons, but the Department of Defense (DoD, which many now call the "War Department") wanted Anthropic to remove those limitations. Anthropic refused, and that's where OpenAI comes in.

And an opportunist appears. Sam Altman first praised Anthropic's stance. A few hours later he announced that OpenAI had reached an agreement with the DoD to replace Claude with ChatGPT. The move has been widely criticized as unethical and opportunistic, and it sparked the "Cancel ChatGPT" movement that has had an immediate impact on the chatbot's downloads and uninstalls.

Altman wants to clear things up. The OpenAI announcement did not make clear whether OpenAI was actually imposing the same limits Anthropic had, but Altman soon announced that amendments had been added to the agreement to avoid any confusion. Protections against mass surveillance have apparently been added, but nothing is said about the development of lethal autonomous weapons.
Punishment for OpenAI. It is not just the uninstalls: in the ChatGPT app's reviews, a very high proportion of users have left one star out of five. Those negative reviews grew 775% on Saturday and another 100% on Sunday, according to Sensor Tower, while five-star reviews fell by 50%.

Claude has overtaken ChatGPT in downloads as a result of the latest events with the Pentagon. Source: Appfigures.

And Claude already surpasses it in downloads. Appfigures, another consultancy that monitors the download market, indicated that on Saturday Claude's downloads surpassed ChatGPT's in the US for the first time. In fact, Claude has become the most downloaded app in at least six countries outside the US: Belgium, Canada, Germany, Luxembourg, Norway and Switzerland.

Streisand effect. We are looking at another case of the Streisand effect: trying to suppress certain information, or punish a certain company, ends up being counterproductive. The Pentagon tried to cast Anthropic as the bad guy, but what has happened is that the company is now seen as the great defender of ethics and "AI alignment", which makes people perceive it as a more morally respectable option than ChatGPT.

But Anthropic has problems. According to Reuters, several US government departments and agencies have switched to OpenAI and begun phasing out Anthropic's models for their work. That alone is a problem for Anthropic, but even more serious is that its recent investment round, in which it raised $60 billion, could be in danger. If the DoD decides to label Anthropic a "supply chain risk", its contracts and agreements with dozens of companies would be at risk, and so would its very future as a company. It would be an extraordinary measure, and it seems unlikely the US would go that far, but nothing is certain these days.

Image | Village Global In Xataka | The war between Anthropic and the Pentagon points to something terrifying: a new "Oppenheimer moment"

While Anthropic lands on the US blacklist, the Pentagon already has a successor lined up: OpenAI

The Pentagon gave Anthropic an ultimatum: accept the unlimited use of its AI models for applications of all kinds, including espionage and military use. The deadline arrived at 5:01 p.m. this Friday, February 27, and Anthropic said no: it would stay faithful to its principles. The sword of Damocles has fallen on the company led by Dario Amodei, and the United States has followed through on its threats.

How it was communicated. A few hours ago, United States Secretary of War Pete Hegseth announced, on behalf of the Pentagon, that Anthropic is now officially a "supply chain risk".

The context. This chronicle of a death foretold has kept to its deadlines, and everyone has held their initial position: Anthropic rejected the Pentagon's demand over concerns about the use of AI for mass civilian surveillance and for the development of weapons capable of firing without human intervention. The company behind Claude has already announced that it will contest the decision. We will have to see the cost of holding its position: the United States is applying a sanction that until now we had only seen applied to companies from rival countries, Huawei being one of the clearest examples.

What happens now. Leaving aside the fact that the president of the United States refers to Anthropic as "a radical left-wing and woke company" on his social network Truth Social, the US Department of Defense has carried out its threat, effective immediately: it will terminate its contract with Anthropic, valued at up to $200 million, and, as Pete Hegseth announced, no contractor, supplier or partner doing business with the United States Armed Forces may do business with Anthropic. There will be a six-month period for the Pentagon and other government agencies to transition from Claude to alternatives.

OpenAI said yes. The United States already has a company to provide those services to the Pentagon and other agencies: OpenAI.
Sam Altman announced the agreement to deploy OpenAI's models on the Pentagon's classified network, explaining that the Department of Defense had shown a "deep respect for security" and that both AI safety and the broad sharing of its benefits are the foundation of OpenAI's mission. Among the security principles Altman specifically mentioned are the prohibition of domestic mass surveillance and human responsibility for the use of force, including autonomous weapons systems. According to the CEO of OpenAI, the War Department is aligned with these principles. He also explained that they will apply technical safeguards to guarantee the correct behavior of their models.

Claude's shadow is long. Saying goodbye overnight to your reference AI company (even with that transition period), and vetoing other companies that work with it, is a tricky measure to put into practice, since Claude is behind recent strategic operations, such as Maduro's arrest and other imminent ones. It also leaves projects in limbo, such as Palantir's, which uses Claude.

Behind the scenes. According to Axios, Deputy Secretary of Defense Emil Michael was in talks with Anthropic to offer a deal just as Pete Hegseth dropped the bomb on X/Twitter. That theoretical agreement would have allowed the collection or analysis of data on US citizens, such as location, web browsing or financial information. For now it is unknown whether this Pentagon interest in collecting personal data applies to OpenAI as well.

In Xataka | IBM has spent decades living off the fact that no one could kill COBOL. Anthropic has other plans In Xataka | Anthropic and OpenAI have developed AI. The US Pentagon is showing them who really owns it Cover | Tomasz Zielonka

OpenAI now lets you share your friends' phone numbers with ChatGPT. The question is why anyone would want to

ChatGPT is changing by leaps and bounds: we already know that ads will arrive sooner rather than later, and how they will work, and in recent days OpenAI has sent its users an email like the one above announcing an update to its privacy policy. The first change: the appearance of the classic "find friends" feature in OpenAI services, a step toward becoming a more social platform by syncing contacts. The message in question: "You can now choose to sync your contacts to see who else is using our services. This is completely optional."

Finding friends in OpenAI apps. The OpenAI privacy policy page lets you consult the current version and the previous one, and comparing them reveals a section that was not there before: alongside account information, user content, communication information and other information you provide, "Contact Data" is new. What it literally says: "If you choose to connect your device contacts, we upload information from your device's address books and check which of your contacts also use our Services. If any of your contacts are not already using our Services, we will inform you if they sign up for our Services later."

What it means. In short, OpenAI wants to access and store the information in your phone's address book and split your contacts' phone numbers in two: those who already have an account and those who don't. The idea is to surface suggestions of contacts you know who use tools like Sora or the group chats. But it also takes note of those who don't use its services, since it tells you if they sign up later. The option is not yet live, and OpenAI has not explained how it will be implemented in the app. What we do know is that it is optional (that is, you can refuse) and that what the company led by Sam Altman will store are the phone numbers in your device's address book: not the names, and not the rest of the address book's details.

How it will work.
OpenAI has explained that the phone numbers are hashed and then compared with existing OpenAI accounts, which is where the suggestions come from. The next question is: how long does it store them? OpenAI itself poses that question in its help page, but the answer is not clear at all. After the process of matching your address book against its database, contact lists are only half deleted, because OpenAI also notes that hashed phone numbers may be kept on its servers to facilitate connection features. Everything indicates that OpenAI will periodically check whether any of your contacts has signed up. In any case, we still don't know the answer. You will, of course, have the option to revoke the permissions.

Why do you want to know that? Haha, regards. The company has not shown what the experience will look like or what features it will unlock for those who agree to share this information. So why would you accept? For now, to see suggestions of users in your address book, like Manolo the plumber or your cousin Pili from Utebo, people you may not particularly want to discuss your group-chat projects or your Sora experiments with. If you decide to connect with someone, that person may receive a notification to follow you: the follow-back of a lifetime, basically.

The small print. Given what we know, and the use we make of OpenAI services, becoming friends with your plumber via Sora is perhaps not essential. However, even if you decline to participate, anyone who has your number and agrees to sync their contacts will be handing your number to OpenAI, even if you don't have an account.

It's all advantages (for OpenAI). It is hard to find advantages for users in this optional feature, which is just the opposite of spotting the benefits for OpenAI. For a start, it weaves a network that invites you to use OpenAI tools because your circle uses them ("I stay because everyone is here").
Likewise, by seeing who is not on the platform, OpenAI can nudge you to invite them, fueling organic growth at a critical moment when competition is fierce. Connecting contacts also has a potentially interesting side: OpenAI could build more collaborative tools that invite you to use the app and spend more time in it. Finally, with this feature the company behind ChatGPT can build a social graph of interests, education levels and professional circles, pure gold for improving personalization, or simply for helping it validate identity and safety in the case of minors.

In Xataka | We already know how ads will work on ChatGPT. We have bad and not-so-bad news In Xataka | Anthropic is growing so fast that OpenAI's problem is growing at the same speed: losing the market that matters Cover | OpenAI communication with Mockuphone and Codioful (formerly Gradient)
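OpenAI has said little beyond "the numbers are hashed", but the matching step it describes can be sketched with a plain hash-and-intersect approach. A purely illustrative Python sketch (the normalization, hashing scheme and function names here are assumptions, not OpenAI's actual implementation; real contact-discovery systems often use salted hashes or private set intersection precisely because plain hashing of phone numbers is weak):

```python
import hashlib

def hash_phone(number: str) -> str:
    """Normalize a phone number and hash it (illustrative, not OpenAI's scheme)."""
    normalized = "".join(ch for ch in number if ch.isdigit() or ch == "+")
    return hashlib.sha256(normalized.encode()).hexdigest()

def find_matches(device_contacts: list[str], account_hashes: set[str]) -> list[str]:
    """Return the contacts whose hashed numbers match an existing account."""
    return [n for n in device_contacts if hash_phone(n) in account_hashes]

# Server-side set of hashed numbers that already have accounts (made-up data).
accounts = {hash_phone("+34 600 111 222"), hash_phone("+34 600 333 444")}

# Your address book: one match, plus one non-user the server could
# re-check later to notify you when they sign up.
contacts = ["+34 600 111 222", "+34 976 555 666"]
print(find_matches(contacts, accounts))  # ['+34 600 111 222']
```

Note the privacy catch this sketch makes visible: the space of valid phone numbers is small enough that unsalted hashes can be brute-forced, which is why how long OpenAI retains those hashes is exactly the question that matters.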

Programming is AI's new battleground. OpenAI and Anthropic have made that clear with GPT-5.3-Codex and Claude Opus 4.6

When ChatGPT burst onto the scene in November 2022, OpenAI seemed unrivaled. And, to a large extent, it was. That chatbot, despite its errors and limitations, inaugurated a category of its own. In the technology sector, however, advantages are rarely permanent and, in 2026, the position of the company led by Sam Altman is a far cry from what it was then. Google has managed to win over the general public with Nano Banana Pro, while Gemini steadily gains ground as an artificial intelligence chatbot. At the same time, ChatGPT's market share has fallen significantly in some markets. Anthropic, for its part, has established itself as a reference in software engineering and has become one of the preferred tools among programmers.

In this race to set the pace of AI, this Thursday we witnessed a curious move: the almost simultaneous arrival of two models focused on programming, GPT-5.3-Codex and Claude Opus 4.6. The coincidence does not seem coincidental; it reflects the extent to which the sector's major players compete to define the next step, in a scenario where the main beneficiaries are, once again, the users. With these new models on the table, the question is what they really contribute. There are plenty of promises, and comparable benchmarks that help place them are beginning to appear. So it is time to look in a little more detail at what OpenAI and Anthropic propose for those who use AI as a development tool.

GPT-5.3-Codex and Opus 4.6 enter the scene: what each promises developers

GPT-5.3-Codex is presented as a model focused on coding agents that seeks to expand the scope of what a developer can delegate to AI. OpenAI claims that it combines improvements in coding performance, reasoning and professional knowledge over previous generations, and that it is 25% faster.
With this balance, the system is aimed at prolonged tasks involving research, tool use and complex execution, while keeping open the possibility of intervening and guiding the process in real time without losing the thread of the work. One of the most striking things OpenAI highlights about this generation is the role Codex itself played in its development: the team used early versions of the model to debug training, manage deployment, and analyze test and evaluation results, an approach that accelerated its research and engineering cycles. Beyond that internal process, GPT-5.3-Codex also shows progress in practical tasks such as the autonomous creation of web applications and games. The company has published two examples that can be tried right now via the links: a racing game with eight maps and a diving game for exploring reefs.

Anthropic's turn comes with Claude Opus 4.6, an update the company presents as a direct improvement in planning, autonomy and reliability within large code bases. The model, they claim, can sustain agentic tasks for longer, reviewing and debugging its own work more accurately. The idea is that these capabilities can be used in tasks such as financial analysis, document research or creating presentations. Added to this is a context window of up to one million tokens, in beta, a leap that seeks to reduce the loss of information in long processes and reinforce the system's usefulness.

Beyond the core of the model, Anthropic accompanies Opus 4.6 with a series of changes aimed at extending its usefulness in real workflows. Among them are mechanisms such as "adaptive thinking", which lets the system automatically adjust the depth of its reasoning depending on the context. Configurable effort levels and context compression techniques, designed to sustain long conversations and tasks without exhausting the available limits, also enter the scene.
Added to this are teams of agents that can be coordinated in parallel within Claude Code and deeper Excel and PowerPoint integration. While OpenAI's model, GPT-5.3-Codex, is not yet available in the API, Anthropic's is. It maintains the base price of $5 per million input tokens and $25 per million output tokens, with nuances such as a premium cost when prompts exceed 200,000 tokens.

Can a winner be measured with numbers?

When trying to put GPT-5.3-Codex and Claude Opus 4.6 face to face, the main obstacle is not a lack of figures, but how hard they are to match up. Each company selects the evaluations that best reflect its progress and, although many belong to similar categories, they differ in methodology, versions or metrics, which prevents a direct reading. With this type of model, such fragmentation of results is part of the state of the technology itself, but it also demands a cautious interpretation that separates technical demonstrations from truly equivalent comparisons. Only through this filter is it possible to identify the few points where both systems can be measured under comparable conditions and draw useful conclusions for developers.

If we restrict the analysis to truly comparable metrics, the common ground between GPT-5.3-Codex and Claude Opus 4.6 is limited to two specific evaluations identified through our own research: Terminal-Bench 2.0 and OSWorld in its verified version. The results show a distribution of strengths rather than a clear supremacy. GPT-5.3-Codex scores 77.3% on Terminal-Bench 2.0 compared to 65.4% for Opus 4.6, which points to greater efficiency in terminal-centric workflows. Conversely, Opus 4.6 reaches 72.7% on OSWorld, surpassing the 64.7% of GPT-5.3-Codex in general system-interaction tasks, a contrast that reinforces the idea of specialization by environment of use.
So we could say that the capabilities each manufacturer describes point to tools that are no longer limited to generating code, but instead seek to participate in prolonged processes of analysis, execution and review within real professional environments. This transition introduces new selection criteria that go beyond one-off performance numbers.

In Xataka | OpenAI has a problem: Anthropic is succeeding right where the most money is at stake
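Before moving on, the Opus 4.6 API pricing quoted above can be turned into a quick cost estimator. A minimal sketch: the $5/$25 per-million-token base rates and the 200,000-token threshold come from the article, but the size of the surcharge does not, so the `long_prompt_multiplier` below is a hypothetical placeholder, not Anthropic's actual premium rate.

```python
def api_cost_usd(input_tokens, output_tokens,
                 in_rate=5.0,    # $ per million input tokens (base rate, from the article)
                 out_rate=25.0,  # $ per million output tokens (base rate, from the article)
                 long_prompt_threshold=200_000,
                 long_prompt_multiplier=2.0):  # hypothetical surcharge, for illustration only
    """Estimate the dollar cost of one API call under the quoted pricing."""
    if input_tokens > long_prompt_threshold:
        # Prompts over the threshold pay a premium; the real multiplier is not public here.
        in_rate *= long_prompt_multiplier
        out_rate *= long_prompt_multiplier
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# A 50,000-token prompt producing 4,000 tokens of output, at base rates:
print(round(api_cost_usd(50_000, 4_000), 2))  # 0.35
```

With the placeholder surcharge, a 300,000-token prompt would be billed at double the base rates; swap in the documented premium once you are working against the real API.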

OpenAI going from 70% share to 46% is the symptom of something more worrying: they have entered panic mode

Between January 2025 and January 2026, ChatGPT lost almost 24 points of market share among daily users of its mobile app in the United States, its main market. Gemini went from 14.7% to 25.1%. Grok, from 1.6% to 15.2%. In web traffic the pattern repeats itself: ChatGPT rose 50%, from 3.8 billion to 5.7 billion views, while Gemini jumped 647%, from 267 million to 2 billion. OpenAI is still the leader, but it now has a real alternative on every front.

Why is it important. When you lose 24 share points while the market grows 152%, something has broken along the way. And it's not just technical leadership. It's the narrative. Sam Altman sold OpenAI as the company that would be the first to reach AGI. That promise mobilized a lot of capital, a lot of talent and a lot of faith. AGI has not arrived yet. Meanwhile, OpenAI has had to become something else: a conglomerate that does quite a bit more, from chatbots to chips to wearables.

The business model problem. OpenAI earned $13 billion in 2025. It lost $12 billion in the last quarter alone. It has 40 million paying subscribers at $20 a month, out of 800 million monthly users. It is still not enough. The company needs AI to work as a business service, not just a consumer product. But there it is losing to Anthropic, which leads with 32% of the business market compared to 25% for OpenAI. Claude Code has become the favorite option among developers: 42% share compared to 21%. Google has 20% and counting. Meta controls 9% with Llama. DeepSeek barely 1%, but its model shows that OpenAI's level can be replicated without the same resources.

The great advantage of Google. Google doesn't need Gemini to earn money tomorrow.
It can afford low prices and red ink for a long time while it perfects the technology and integrates it into products that already work: the search engine, YouTube, Android, Chrome... OpenAI depends on ChatGPT to survive; its snowball of debt and payment commitments is too big. Sundar Pichai's strategy is clear: not to place advertising in Gemini, to maintain trust, but to try placing ads in the AI-powered search engine, where users already expect them. Google can learn without risking its brand.

Yes, but. Altman has reacted with quite aggressive diversification. OpenAI no longer wants to be just a model company, but to control multiple layers, from hardware to consumer applications. The objective is to become too big to fail: for a hypothetical failure to represent a systemic risk to the US economy, as happened with the banks in 2008.

Behind the scenes. The dispersion is becoming noticeable. Banking is reducing its dependence on OpenAI. Eighteen months ago, half of AI use cases at large banks used OpenAI models; by the end of 2025, that figure had fallen to a third. While OpenAI loses focus, Anthropic gains it. Anthropic projects it will be profitable in 2028; OpenAI, having moved the goalposts along the way, in 2029.

In Xataka | Google had a practically unsolvable dilemma with AI and its search engine.
So it has chosen to create a subscription.

The news "OpenAI going from 70% share to 46% is the symptom of something more worrying: they have entered panic mode" was originally published in Xataka by Javier Lacort.
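As a footnote to the traffic figures in this piece, the growth percentages can be reproduced with a one-line formula. The inputs are the monthly web-view counts quoted above; the small gap between the computed ~649% and the quoted 647% for Gemini presumably comes from the inputs being rounded.

```python
def pct_growth(before, after):
    """Percentage growth from `before` to `after`."""
    return (after - before) / before * 100

# Monthly web views quoted in the article:
print(round(pct_growth(3.8e9, 5.7e9)))  # 50  -> ChatGPT's 50% rise
print(round(pct_growth(267e6, 2e9)))    # 649 -> Gemini's ~647% jump (rounded inputs)
```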

That Oracle speaks out on the soap opera between NVIDIA and OpenAI is a bad sign. That it will not turn a profit on this until 2029 is, too

Oracle said in a tweet that the agreement between NVIDIA and OpenAI has "zero impact" on its financial relationship with the company behind ChatGPT. This is more complicated than it seems, because the AI business could end up collapsing if a large company like NVIDIA or Oracle shows even a hint of doubt about OpenAI. The latest statements by Jensen Huang, CEO of NVIDIA, have made the market nervous, and Oracle's own path is not very encouraging either.

Why is it relevant? Oracle just announced that it will raise between $45 billion and $50 billion this year through debt and equity issuance to build cloud infrastructure for its large AI clients. Among them, OpenAI stands out, with a $300 billion contract over five years that starts in 2028. The problem is that OpenAI is not profitable right now, and Oracle needs OpenAI to raise capital so that it can pay it. It is a circular financing loop where everyone depends on everyone else continuing to sign checks.

The numbers don't add up yet. The contract with OpenAI involves about $60 billion annually starting in 2028. To fulfill it, Oracle must buy approximately 400,000 of NVIDIA's GB200 chips, at an estimated cost of $40 billion just for its flagship data center in Abilene, Texas. Meanwhile, OpenAI's total revenue in 2025 was around $13 billion, according to Bloomberg. Oracle is betting its bottom line that a company that currently burns more cash than it generates can pay bills equal to five times its current annual revenue.

The alarm signals. In January, investors accused Oracle of hiding the need for more debt to finance its AI infrastructure, according to Reuters. Oracle's debt-to-equity ratio is at 6x, and in December its credit default swaps reached levels not seen since the 2008 financial crisis, according to Bloomberg.
On top of all this, Oracle's stock has fallen 50% from its September peak, reached precisely when it announced the agreement with OpenAI, erasing some $460 billion in market capitalization.

In the red until 2029. Building data centers for AI has pushed Oracle's free cash flow into negative territory, where it is expected to remain until 2030, according to data compiled by Bloomberg. Jefferies estimates that the company will need to raise more funds in 2027 and subsequent years, since cash flow will not turn positive again until 2029. Oracle plans to raise $50 billion: half through equity, with convertible preferred securities and a share-sale program of up to $20 billion, and the other half through a single bond issue in early 2026.

Between the lines. What really worries the market is the structure of mutual dependence. NVIDIA funds OpenAI. OpenAI pays Oracle. Oracle buys chips from NVIDIA. Everyone's revenue growth depends on everyone else continuing to write checks. When Jensen Huang, CEO of NVIDIA, told journalists that the $100 billion agreement with OpenAI "was never a commitment" and that they would invest "step by step", Oracle had to come out with that tweet to calm the waters. And that tweet is precisely the type of communication that worries investors.

In Xataka | The CEO of Airbnb is clear that there are companies with too many meetings: his trick is to follow Jony Ive's philosophy
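The contract arithmetic in this piece is easy to verify. A minimal check, using only the figures quoted above ($300 billion over five years, roughly $13 billion of OpenAI revenue in 2025 per Bloomberg):

```python
contract_total = 300e9      # five-year OpenAI-Oracle contract, from the article
years = 5
annual_payment = contract_total / years

openai_revenue_2025 = 13e9  # OpenAI's 2025 revenue, per Bloomberg as quoted above

print(annual_payment / 1e9)                            # 60.0 -> ~$60B per year
print(round(annual_payment / openai_revenue_2025, 1))  # 4.6 -> roughly 5x revenue
```

This matches the article's "$60 billion annually" and, with rounding, its "five times its current annual revenue" characterization.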
