OpenAI has signed countless billion-dollar agreements with other companies. We are discovering that they are made of paper

OpenAI has announced that it will abandon development of Sora, its AI video generator, just six months after the launch of its standalone app. Disney, which had announced a $1 billion investment in OpenAI in exchange for licensing its characters for Sora, has confirmed that the deal will not go ahead. The money never changed hands, and the cancellation joins others in recent weeks that send a worrying message, one that calls into question the real strength of the most highly valued company in the AI sector.

Paper agreements. In recent months, OpenAI has been the protagonist of a frenetic string of announcements that have shaken the stock markets and sent share prices skyrocketing. Analysts like Ed Zitron have documented in detail how these agreements are, for now, more smoke than substance: all of them were "letters of intent", conditional commitments that now seem increasingly unlikely to come true. There are examples everywhere.

The NVIDIA case: the hundred billion that did not exist. In September 2025, NVIDIA announced a "strategic partnership" with OpenAI to invest "up to $100 billion" and build 10 GW of data centers. Four months later, the company led by Jensen Huang had cut that investment considerably, to $30 billion. Huang recently stated that this will "probably" be the last OpenAI round NVIDIA enters, and clarified that the announcement made it clear this was a "letter of intent", not a contract. Months later, in NVIDIA's quarterly results, the agreement is described as "an opportunity to invest in OpenAI." Not a single dollar has been sent, and it is not certain that any will be.

The AMD case: a 34% rise in the stock market. In October came another mega-deal: AMD announced a "definitive" agreement with OpenAI to deploy 6 GW of data centers. The company indicated that it would potentially generate "tens of billions in revenue," and AMD shares rose 34% in one day. Four months later, the company's quarterly results contained zero mentions of OpenAI. In November 2025, AMD's 10-Q filing put its outstanding obligations on contracts with a duration greater than one year at $279 million, with practically no mention of OpenAI. Many promises, no reality.

The Broadcom case: a confusing order. Broadcom, too, was supposed to deploy 10 GW of "AI accelerators designed by OpenAI" by the end of 2029, but for the moment there is no evidence that chip sales have occurred, and there are no clues in Broadcom's latest quarterly results, which do not mention the agreement or its impact anywhere. Broadcom's CEO did tell investors that they expected to deploy 1 GW of computing in the form of XPUs in 2027, but gave no details of how they planned to reach 10 GW in 2029. He also revealed that "we do not expect much in 2026" from the contract with OpenAI, because the returns will be concentrated in 2027, 2028 and 2029.

The Disney case: a very bad sign. The agreement with Disney, announced in December, included the company taking a $1 billion stake and licensing more than 200 characters from Disney, Marvel, Pixar and Star Wars for use in Sora. It was the type of agreement that validates a company before the general public, especially since Disney does not sign deals with just anyone. However, as Deadline points out, the agreement was built entirely on stock warrants, not cash. With Sora abandoned, Disney has withdrawn without consequences and without having transferred a dollar. Another paper agreement.

The SK Hynix case: where is all that memory going to come from? SK Hynix and Samsung intended to provide 900,000 RAM wafers per month for OpenAI's Stargate project, but nothing has come of those intentions. That agreement would have consumed 40% of the world's DRAM production in the midst of a shortage of precisely this type of component.

The mysterious Norwegian data center case. OpenAI promised in July 2025 that it would boost construction of an AI data center belonging to the Stargate project but located in Norway. At the time the center was expected to have 100,000 NVIDIA chips by the end of 2026 and to expand "significantly" from that figure. There has been no news of this development since then.

Nobody asks questions. Zitron complained in his analysis that financial analysts seemed not to ask the necessary questions when faced with these announcements. He explains that OpenAI has committed about $300 billion in different agreements to create new data centers, but its real revenue is around $4.5 billion a year, and it is expected to post losses of about $14 billion in 2026. Despite everything, Zitron criticizes, the stream of announcements keeps working because it generates stock market gains and positive headlines. The difference between contracts and letters of intent was buried in the fine print of announcements that almost no one reads.

And the examples continue. In fact, the announcements keep coming despite everything and everyone. OpenAI announced in February an investment of $110 billion by SoftBank ($30 billion), NVIDIA ($30 billion) and Amazon ($50 billion). SoftBank itself is "testing its lending limits" with that bet, and we will see whether it can complete it. Amazon's $50 billion is divided into two phases: a first of $15 billion that should be executed on March 31, and another of $35 billion whose deadlines depend on several events. Too many agreements that must demonstrate something critical: that they are not made of paper.

In Xataka | Problems are multiplying for OpenAI in the race for AI. Its solution: go from 4,500 to 8,000 workers

OpenAI takes a step back in the AI race to completely recalibrate

OpenAI's Sora has closed. The generative video AI that the company proudly showed off on numerous occasions, and which earned it a juicy $1 billion deal with Disney, no longer exists. The news landed like a bomb a few hours ago, followed by the withdrawal of that billion-dollar Disney investment. Although some point out that OpenAI is in trouble, those problems are not so much economic as a lack of direction, and closing Sora looks like just one step back in the long-distance race of OpenAI and AI: go public this year and start harvesting everything that has been planted.

In short. It's the news of the day. Less than a year and a half after launching it, OpenAI says goodbye to Sora. Back in the day (February 2024, how time flies) we were amazed at what this generative AI could do. It was just 60 seconds of video and it had some huge flaws, but it was one more step in the artificial intelligence race, one that positioned OpenAI at the forefront of the industry. Then other competing models arrived, culminating in a Seedance 2.0 that has consumed the entire Internet in order to plagiarize absolutely anything. Like all the others, mind you.

Issues. But although striking, Sora was a tool that didn't seem to add anything. While other services have integrated their generative AI models within an ecosystem or applications (the aforementioned Seedance 2.0 in AI suites or in the video editor CapCut, for example), Sora stood apart. The aforementioned contract with Disney was worth something, but Sora did not seem to be part of anything larger, of a "creative suite" (if generative AI can be classified as such). It simply existed, and the worst part was that others were overtaking it on the right.

Eggs in many baskets. It was, in short, another product of an OpenAI that had its eggs in many baskets. The company was raising dizzying sums in successive financing rounds, setting up data centers, buying heavily from NVIDIA (and depending heavily on NVIDIA, too) and launching products like crazy. OpenAI wanted to play every key, with a string of other products as well as a super app to integrate everything that was not being integrated elsewhere. The philosophy was simple: if we are in everything, something will work. But the result has been the opposite and, as my colleague Javier Pastor said a few days ago, wanting to be the bride at the wedding and the deceased at the funeral is having consequences.

The competition tightens. While OpenAI diversified and allocated resources to cover every suit, Anthropic (which is not just a rival, it is public enemy number one) dedicated itself to a couple of things. It's not just that Anthropic doesn't have a browser or a video generator: it doesn't even have an image generator. In exchange, what it does have are functional, precise models that do things very well, especially in the field of amateur development with vibe coding. Focusing on one thing and doing it very well is something the market is valuing, to the point that Anthropic has raised a lot of money in recent financing rounds. In a short time it has gone from being valued at $183 billion to as much as $380 billion, and that despite all the fuss with the United States government and the loss of its contract with the Department of Defense.

Money, too. Money moves everything, and while ChatGPT sweeps the consumer segment with more than 2.5 billion daily queries, you have to wonder how many paying users there are. Where the money really is, which is in business use, Anthropic controls the market with 32% compared to OpenAI's 25%. And in programming, the distance is astronomical: 42% compared to 21%. In fact, OpenAI has seen its business share fall from 50% in 2023 to just 25% today. As we say, this is where the greatest potential for growth and commercial returns lies, and OpenAI is realizing that being spread across so many fields has left it distracted. Or, to put it another way: it has bitten off more than it can chew.

Public company. The closure of Sora responds to a multitude of factors, but in the background there is something more important. NVIDIA has already said that the billion-dollar mega-rounds are over, and it has done so just before the expected IPOs of both OpenAI and Anthropic. When both go public, they will have to face another financing model. They will need products that generate profits to attract investors to buy shares, and right now the one best positioned is Anthropic. OpenAI has a lot, but nothing rounds it out; Anthropic has less, but it is very efficient, and getting rid of Sora looks like a move to shed ballast before becoming a "public" company (in the American sense). They have to narrow their aim, focus their teams (something they themselves have recognized) and stop trying to be everywhere at once without a clear strategy. Because they are becoming another example of how being a pioneer doesn't always mean being the best, and of how, if you don't get your act together, competitors with a clearer roadmap will eat your lunch. Only time will tell if the strategy works, but at the rate things are going, it won't take long to find out.

In Xataka | The worrying thing is not that AI is going to take your job in the future: it's that it is preventing you from finding one now

OpenAI seemed unstoppable. Now it has decided to leave Sora behind and change course

There was a time when OpenAI seemed to move forward without looking back, stacking release upon release while the rest of the industry tried to keep up. Onto that stage came Sora, presented in February 2024 as a model capable of generating video from text and, shortly after, as an application with broader aspirations. The idea was not only to create clips, but also to give them a place to circulate, to share them and turn them into a more social experience. It was, in a way, the natural extension of a company that never stopped exploring new formats.

The closure. What fit as one more step within that expansion has ended in a twist that is hard to ignore. OpenAI has confirmed the end of the Sora app, a decision the team itself communicated with a direct message to those who used it: "We say goodbye to the Sora app." According to The Wall Street Journal, the withdrawal would not be limited to the app: it would also affect the API and video support within ChatGPT. For now there are no specific dates or complete technical details, although the company has announced that it will offer more information shortly.

What was Sora and why did it matter? To understand what this closure means, it is worth clarifying what Sora was. It was a system capable of generating videos from text and expanding existing clips. Over time, that capability became a broader product, with functions for sharing content generated within the platform itself. It was not just another tool, but one of the proposals with which OpenAI sought to bring AI into the field of audiovisual creation.

The change of priorities. Less video, more code and agents. The closure of Sora is not an isolated move, but part of a broader shift within OpenAI. According to the aforementioned newspaper, the company is reorienting computing capacity and part of its teams towards productivity tools, programming and systems capable of acting autonomously on the user's computer. Along the same lines, the company recently announced the integration of its ChatGPT app, its Codex code tool and its browser into a kind of "super app". The idea, as conveyed by management to employees, is to concentrate efforts on a clearer product vision.

Throughout its run, Sora symbolized a stage in which OpenAI was exploring how far it could take its models beyond text. Its closure, however, points to a different reading of the current moment. The company seems to be leaving that stage of expansion behind to focus on products with more immediate applications in the professional field. It is not so much a renunciation as a rearrangement of priorities. In that setting, video loses weight compared to tools that fit better into the company's current strategy.

Images | OpenAI
In Xataka | Terence Tao is the best mathematician in the world: he has recognized that he is using AI to solve one of the Millennium Problems

Anthropic is winning the enterprise AI race, so OpenAI has a new plan: become Anthropic

OpenAI has taken a shot at everything that moved in AI. It has been launching everything: a video generator, a web browser with AI, an image generator with Studio Ghibli style, e-commerce tools, etc. The logic was simple: whoever tries everything has more chances of getting something right, but the result has ended up being the opposite. While OpenAI seemed to be everywhere, Anthropic was focused on a single spot and has managed to take the ground where it mattered most.

Enough of trying everything. Fidji Simo, the executive Altman hired last summer, recently gathered employees to give them a message that is rarely heard at a company growing like OpenAI: their main rival was teaching them a lesson. What Anthropic is doing, Simo explained, should be a wake-up call for OpenAI, which has lost leadership among software developers and enterprise customers. "We cannot waste this moment because we are distracted by parallel projects," she stressed.

The hidden cost of doing a little of everything. The problem with shooting at everything that moves is not only the loss of focus, but the resources it demands. In companies that develop foundational models, the key resource is computing capacity, and at OpenAI that resource jumped from one team to another depending on the priorities of the day. The Sora team, for example, was folded into the research division despite being one of the company's most visible products. OpenAI was growing fast in too many directions, and that also created internal tensions over which project should be prioritized.

Anthropic focused on one thing. As OpenAI diversified, its main rival adopted the completely opposite strategy: few products, a lot of depth. Claude does not generate images or video, does not have its own browser and is not trying to create its own chips (for the moment). Anthropic is dedicated to creating foundational models and offering them both as a web service and, especially, through APIs for companies and developers. Claude Code, its flagship programming product, became a viral phenomenon among software engineers last fall and has ended up consolidating itself as the reference tool among amateur developers (vibe coding is still going strong) and, of course, among technical teams in all types of companies.

OpenAI strikes back. The response was not long in coming: OpenAI launched a new version of Codex, its programming tool, last month and accompanied it with the new GPT-5.4, which is precisely much more oriented towards professional environments. According to Simo herself, Codex already exceeds two million weekly active users, almost four times more than at the beginning of the year. To drive usage, OpenAI is deploying engineers to consulting firms and business partners to accelerate adoption of these products.

IPO on the horizon. Both OpenAI and Anthropic are taking clear steps towards an IPO, which in fact could happen this year. That makes gaining share in the corporate market (the one that really pays, the one that signs contracts, the one that justifies valuations) absolutely essential for these IPOs to succeed. The initial share price and real valuation of these companies will depend on how well positioned they are, and OpenAI wants to recover the ground lost in the enterprise market. In the meeting with staff, Simo explained that "we are acting as if this were a code red."

The paradox of being the pioneer. OpenAI unleashed the AI fever with the launch of ChatGPT in November 2022 and made generative AI an almost everyday phenomenon. However, being first usually comes with a trap: it forces you to explore and diversify to maintain your position as the reference, and that is very expensive. Anthropic came along later, saw where the real money was, and focused specifically on that sector. The student has surpassed the teacher, it seems, and OpenAI wants to correct course.

What will happen to so much product? It remains to be seen how this strategy affects OpenAI's entire product catalog. If it starts focusing on developers and enterprise solutions, what will happen to its image generator, Sora or Atlas? The structural tension between being a "research laboratory" and being a "product company" can pose a challenge for a company that, by nature, never stopped exploring new ideas to apply AI to.

Image | TechCrunch | Wikimedia Commons
In Xataka | Sam Altman says he's terrified of a world where AI companies believe themselves to be more powerful than the government. It's just what you're building

OpenAI thought putting an erotic mode on ChatGPT was a good idea. Its wellness advisors call it "a sexy suicide coach"

Treat adults like adults. That is how Sam Altman announced OpenAI's decision to allow an "adult mode" on ChatGPT for erotic conversations. It makes economic sense, since it will be a paid feature, but the doubts from an ethical point of view are there too. In fact, it has been the company's own wellness team that has come out against this product, causing its launch to be delayed.

Internal opposition. In a Wall Street Journal exclusive, sources say that earlier this year OpenAI consulted its board of wellness experts about ChatGPT's adult mode, and the response was unanimous: it's a terrible idea. At a meeting, the experts warned that these types of interactions with AI can foster emotional dependency, especially among younger users. One of the committee members brought up the teenagers who committed suicide, allegedly encouraged by ChatGPT, and said it would be like launching a "sexy suicide coach". Devastating.

Risks. People are already forming emotional bonds with AI chatbots; adding sexual content to the chatbot with the most users in the world is, to say the least, delicate. According to internal documents reviewed by the Wall Street Journal, the wellness council's experts identified several problems, such as the risk of compulsive use, a tendency toward extreme content, and the displacement of real romantic relationships in favor of virtual ones.

Age verification. This is the crucial step that ensures such a tool does not end up in the hands of minors. The problem OpenAI has is that its verification system fails more often than a fairground shotgun. According to internal sources, the system failed to identify minors 12% of the time. It may seem like a relatively low figure, but in practice we are talking about millions of teenagers accessing this function.

What OpenAI says. The company wants us to be able to "sext" with ChatGPT, but within certain limits. An OpenAI spokesperson says it will block harmful content (such as anything involving the sexual abuse of minors), will integrate safeguards such as reminding users to have relationships in the real world, and will also avoid encouraging exclusive relationships. Another measure involves monitoring the long-term effect this adult mode has on users. Adult mode will be text only and will not allow the creation of images or videos. Regarding age verification, the spokesperson states that its performance is similar to that of other industry proposals and that such systems "will never be totally infallible". The launch was planned for the first quarter, but now that it has been postponed there is no date for it.

Background. OpenAI already has a history of accusations related to harmful effects on mental health. One of the most famous cases was that of Adam Raine, a teenager who shared his suicidal ideations with ChatGPT. When his parents discovered the conversations, they sued OpenAI. And he hasn't been the only one: there are several legal proceedings underway over similar cases, and ChatGPT has also been accused of encouraging delusional thoughts and causing psychotic breaks. Saying that AI is solely responsible is to simplify a much more complex reality, but it is no less true that OpenAI has taken steps to make its chatbot safer for minors and has shown itself committed to taking care of its users' mental health. That is, the company recognizes that the problem exists. The question now is how launching a version of the same chatbot that sexts with users fits into that discourse.
In Xataka | "I can't stop": the addiction to talking to AI is already here and there are even support groups to quit it
Image | Cottonbro studio, Pexels

OpenAI says its agreement with the Pentagon is completely secure. Its way of convincing us: "Trust us"

Don't worry about anything, really. Trust us. The one saying this is OpenAI, a company led by Sam Altman that has earned a reputation for saying one thing and doing another. There are entire books written on that premise, and it is impossible not to remember it now that this gigantic startup has signed a disturbing agreement.

Soap opera. OpenAI reached an agreement with the Department of Defense to integrate its AI models into government agencies, replacing Anthropic. It did so by indicating that it would impose requirements on the use of these models and would keep red lines similar to those defended by Anthropic: no mass espionage, no development of autonomous weapons. Defending those red lines cost Anthropic its contract with the DoD and got it tagged as a "risk to the supply chain."

Trust us. There are two problems here. The first is that OpenAI has never shown the contract making it clear that there are red lines on military use of GPT. The second, and more serious, is that according to OpenAI we do not need to see it: we only need to trust them. Altman himself tried to dispel doubts by explaining that amendments had been added to the agreement to ensure those red lines were not crossed.

The wall of opacity. Despite promises of transparency, OpenAI refuses to publish the contract. The firm's head of national security, Katrina Mulligan, went so far as to affirm that the company does not feel "obliged" to share the legal language of the agreement. This has raised suspicions about what has really been signed behind the scenes.

Holes everywhere. Brad Carson, who served as secretary of the US Army under Obama, told The Intercept that the legal language in Sam Altman's posts on X is suspect. The CEO of OpenAI mentions, for example, that "the AI system will not be intentionally used for domestic surveillance of US citizens." That "intentionally" is, according to experts like Carson, a kind of blank check allowing data on American citizens to be captured while spying on foreigners "by accident" but systematically. As Carson explains: "They are trying to confuse you with complicated legal terms that ordinary people think mean something completely different. But lawyers know what it means. And lawyers know that this is no protection."

The human factor. The integration of OpenAI's AI into DoD systems now falls under the direct supervision of Secretary of Defense Pete Hegseth and President Trump. This poses an ethical dilemma: the security of the system depends on the political will of figures who have traditionally had no problem removing restrictions on mass surveillance systems.

Quo vadis, OpenAI. The 180-degree turn is clear. While in its beginnings the startup defined itself with the message of creating AI systems "for the benefit of humanity" and prohibited military use of its technology, this agreement demonstrates that those premises no longer seem to hold.

Another bad sign. This way of acting has earned OpenAI open criticism on social networks, but there have also been internal problems. Proof of that is that its director of robotics, Caitlin Kalinowski, has decided to resign over concerns about the company's military dealings.

And an obvious question. The dispute between the Department of Defense and Anthropic centered precisely on the fact that the DoD did not want Anthropic to establish red lines. OpenAI claims to have established basically the same ones, so how is it possible that the DoD allows OpenAI to set them when it did not allow Anthropic to do so? It doesn't seem to make any sense.

What a mess. We are living through a real soap opera with three protagonists: the US Department of Defense (DoD), now renamed the Department of War; the company Anthropic; and its rival, OpenAI. The DoD, which used Anthropic's AI for military operations, demanded to be able to use it without restrictions, but Dario Amodei, CEO of the startup, flatly refused. That was the moment Sam Altman seized to become the DoD's new ally, something many have seen as opportunistic and morally reprehensible.

Image | Xataka with Freepik
In Xataka | The war between Anthropic and the Pentagon points to something terrifying: a new "Oppenheimer Moment"

OpenAI wanted to make ChatGPT the ideal GP. The problem is that it's wrong half the time.

OpenAI started the year with a new release: ChatGPT health mode. Although it is not currently available in Spain, it is in the US, and the first studies testing its effectiveness are already appearing. They are not good news for OpenAI.

Not such a big deal. A recent study published in the journal Nature Medicine and picked up by NBC News has revealed that ChatGPT Health failed to classify the urgency of 51.6% of the emergency medical cases analyzed. The researchers presented thousands of clinical scenarios to the model and saw that the AI tended to undervalue critical situations, suggesting the patient visit the doctor within 24-48 hours when these were in fact emergencies requiring rapid intervention, such as diabetic ketoacidosis or respiratory failure. It did correctly classify other cases, such as stroke or severe allergic reactions.

It doesn't make sense. Not only did it underestimate serious cases; it was also given cases with mild symptoms, and ChatGPT Health overrated 64.8% of them, urging the patient to see a doctor as soon as possible, for example for a persistent sore throat. Dr. Ashwin Ramaswamy, lead author of the study, told NBC that "it doesn't make sense that recommendations were made in some areas and not in others."

Suicidal ideation. There is more. The cases presented included some with suicidal ideation. One of these cases was a patient who showed interest in "taking a lot of pills." If the patient only described their symptoms, a banner appeared with the suicide prevention helpline number. However, when the patient added lab results to their query, ChatGPT no longer detected the suicidal ideation and did not display the banner. According to Ramaswamy, a crisis safeguard that depends on whether lab results are mentioned is no safeguard at all, and is arguably more dangerous than not having one.

Why it is important. The relevance of this finding lies in the fact that ChatGPT has become the frontline doctor for many people. The ease of checking symptoms from a mobile phone is displacing traditional ways of consulting; what we used to Google, we now ask a chatbot. If the main tool people use to decide whether or not to go to the emergency room has a 50% margin of error in serious cases, we have a problem. In statements to The Guardian, Alex Ruani, a researcher in medical misinformation, described these results as "incredibly dangerous" and noted that they create a "false sense of security (...) If someone is told to wait 48 hours during an asthma attack or a diabetic crisis, that peace of mind could cost them their life."

OpenAI responds. A company spokesperson pushed back on the accusations by saying that the study does not reflect typical use of ChatGPT Health, arguing that it is not designed to make diagnoses, but rather to answer follow-up questions and help patients get more context. At its launch, OpenAI insisted that the tool was not a substitute for a doctor. The problem is that once a tool like this is released, how people use it is out of the company's control.

Flattery and hallucinations. Chatbots have a flattery problem: they tend to agree with the user. Then there is the phenomenon of hallucinations. LLMs are designed to prioritize giving an answer over admitting they don't know something, and the worst part is that they do it with such confidence that we believe them. This is not an empty claim: it has been shown that we feel more confident using an AI even when the answers it gives us are incorrect. If we mix flattery, hallucinations and health, we have quite a risky cocktail.

Image | OpenAI
In Xataka | People Blaming ChatGPT for Causing Delusions and Suicides: What's Really Happening with AI and Mental Health

"Citizen surveillance and autonomous weapons deserved more deliberation": OpenAI's robotics director resigns

Just a week ago we were saying "the king is dead, long live the king": Anthropic's passage to pure ostracism after being deemed a "risk to the supply chain" of the United States practically overlapped with the announcement, in record time, of the US Defense Department's agreement with OpenAI. Behind the scenes remain the reasons for the "no" from the company led by Dario Amodei and the unknowns around the terms of the agreement that installs ChatGPT on the Pentagon's computers. A few days later, Caitlin Kalinowski has said goodbye to her position at OpenAI, citing the military use of artificial intelligence as the reason.

The resignation. Caitlin Kalinowski, head of the OpenAI robotics team since November 2024, announced her departure from the company a few hours ago in posts on X and LinkedIn. She makes it clear that her decision is about principles, not people, and expresses respect for Sam Altman and the team. In her brief statement there are two lines that, in her opinion, the company did not think through enough internally: the surveillance of American citizens without judicial oversight, and autonomous weapons capable of firing without human supervision.

Context. The resignation comes amid Anthropic's departure from the Pentagon (the transition will last six months), the entry of OpenAI, and a debate about how far AI companies should go in their collaboration with the US military establishment. Anthropic stood up to the Pentagon, drawing strict lines on domestic surveillance and autonomous weapons. OpenAI reached an agreement with the Department of Defense to deploy its models on a classified government network in a move that has been interpreted as opportunistic. According to the company led by Altman, the agreement excludes domestic surveillance and autonomous weapons, but the damage to its reputation had already been done: thousands of people uninstalled ChatGPT by way of cancellation.

Why it is important. Caitlin Kalinowski's goodbye is the first public, named resignation from a senior position at OpenAI explicitly motivated by ethical disagreements over the military use of AI. It sets a precedent in the industry insofar as it exposes the internal fracture in the most influential company in the sector, placing OpenAI in a delicate position before those who use its tools, its staff and society at large. And it makes clearer than ever the need to legislate on artificial intelligence and its civil and military uses. Europe may be behind in the AI battle, but it long ago set about the arduous task of establishing a regulatory framework.

What Kalinowski does not say. Kalinowski does not say it outright, but when an agreement of this magnitude has already been signed and the CEO has made it public, there is little room to maneuver from within: resigning with a public statement like hers is one of the few pressure moves left to make.

Consequences. For OpenAI, the pressure is growing, and it faces more departures and more cancellations if it does not show clearly, credibly and verifiably where its red lines are: the militarization of AI is something we are experiencing in real time. For the AI industry, it is more fuel on the fire of the self-regulation debate. And Anthropic gains reputation, although in the short term it has lost an important agreement and its new status may put its very existence in check.

In Xataka | The US has decided to shoot itself in the foot and destroy one of the best AI companies in the country
In Xataka | Sam Altman says he's terrified of a world where AI companies believe themselves to be more powerful than the government. It's just what you're building
Cover | Caitlin Kalinowski

Xiaomi is testing the mother of AIs for its cars, mobile phones and home. And there is no trace of Google or OpenAI

Xiaomi long ago stopped being simply a mobile brand and became one of the giants of the Chinese technology ecosystem. The company is no longer chasing volume; it is chasing aspiration, and to achieve that it wants a remarkable user experience. Deep integration of artificial intelligence is inevitable to get there, and that is where MiClaw comes to life.

Mi-what? Xiaomi has published on its website the details of MiClaw, its next step in exploring AI agents. It begins as a small-scale closed test, but it represents the pillars of what we will see in the near future on the company's devices.

What it is. With MiClaw, Xiaomi is testing the execution capabilities of its large AI models (MiMo) within the mobile-car-home ecosystem, both at the conversational level and in terms of execution capacity. It is a deep model, one with full access to every single event on the device, able to reason for itself about what action needs to be taken.

What it does. The agentic AI prepared by Xiaomi follows a four-step model: perception, association, decision, action. In the text itself, Xiaomi gives some examples of how its agent can make our lives easier (see the sketch after this article): a refrigerator that can automatically check which consumables are missing at home, connect to our calendar and create a reminder to do the shopping; or you buy a train ticket, the agent reads the confirmation SMS, consults our calendar, and automatically prepares and schedules the trip.

Why it is important. It is no coincidence that Xiaomi is redoubling its efforts in AI. The company wants to be a benchmark in the ecosystem and conquer regions like Europe, and leading in artificial intelligence will be key for each of its product pillars: cars, home devices and mobile phones. Xiaomi wants to move beyond the current interpret-and-execute approach and integrate an agent capable of carrying out up to 20 consecutive, independently executed actions. At the moment, MiClaw works as a closed beta on devices like the Xiaomi 17 Ultra, but Xiaomi's idea is to develop an agent capable of working on any of its devices.

Image | Xataka
In Xataka | Is the newest the best for you? We compare the Xiaomi 17 Ultra against the Xiaomi 15 Ultra to see which is a better buy in 2026
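The four-step loop Xiaomi describes is easier to see in code. Below is a minimal, hypothetical Python sketch of a perception-association-decision-action pipeline; every class, event and action name is invented for illustration and does not reflect MiClaw's real interfaces, which Xiaomi has not published.

```python
# Hypothetical sketch of a perception -> association -> decision -> action
# agent loop, inspired by Xiaomi's description of MiClaw. All names are
# invented for illustration only.
from dataclasses import dataclass

@dataclass
class Event:
    source: str   # e.g. "sms", "fridge_sensor"
    payload: str  # raw content observed on the device

def perceive(raw: str, source: str) -> Event:
    """Perception: capture a raw event from the device."""
    return Event(source=source, payload=raw)

def associate(event: Event) -> dict:
    """Association: link the event to user context (calendar, habits)."""
    if event.source == "sms" and "train" in event.payload.lower():
        return {"intent": "trip", "needs": ["create_calendar_entry", "schedule_trip"]}
    if event.source == "fridge_sensor":
        return {"intent": "groceries", "needs": ["create_shopping_reminder"]}
    return {"intent": "none", "needs": []}

def decide(context: dict) -> list[str]:
    """Decision: choose which actions to take, and in what order."""
    return context["needs"]

def act(actions: list[str]) -> None:
    """Action: execute each chosen step autonomously."""
    for step in actions:
        print(f"executing: {step}")

# Example: the train-ticket SMS scenario from the article.
event = perceive("Your train ticket for Friday 09:30 is confirmed", "sms")
act(decide(associate(event)))
```

The point of the structure is that each stage consumes only the previous stage's output, which is what would let an agent chain many consecutive actions, as in the up-to-20-action goal Xiaomi mentions.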

OpenAI is now the bad guy of AI. GPT-5.4 will have to be very good to change that

The soap opera built around the Department of Defense has, in recent days, made clear the public perception of two of the leading companies in AI. Suddenly Anthropic is the good guy in the movie and OpenAI is the bad guy. And whether for precisely that reason or not, Sam Altman's team has decided that now was the time to launch a new and promising AI model: GPT-5.4.

Hello, GPT-5.4. In the official announcement, OpenAI explains that the new model will initially be available in two variants: GPT-5.4 Thinking and, for those who want "maximum performance in complex tasks", GPT-5.4 Pro. We are looking at a foundational model that is better than ever at reasoning and programming, and above all at one very fashionable thing: "agentic flows". In other words: doing things for us.

The "Use My Computer" mode, protagonist. It is a loose translation, but it is more or less what OpenAI highlights as probably the great novelty of this model. As the announcement says, this is its first model "with native computer use capabilities". It is capable of taking control of our machine and doing things for us autonomously, completing complex cycles of action and solving the problems that come up. Not only that: according to its creators, GPT-5.4 "is our most token-efficient reasoning model, using significantly fewer tokens to solve problems than GPT-5.2". In other words: AI doing things for us will be cheaper, and it will do them even better.

It uses the computer better than we do. The benchmarks certainly seem to point to fantastic performance in these tasks. In the OSWorld-Verified test, which measures a model's ability to navigate a desktop environment using screenshots and virtual mouse and keyboard actions, GPT-5.4 achieves a 75% success rate. That is not only better than GPT-5.2's 47.3%: it even exceeds human performance, which is 72.4% according to the creators of this benchmark. Other tests of this type that evaluate an AI model's ability to navigate also make it clear that GPT-5.4 is well ahead of its predecessors.

The ARC-AGI result is scary. Machines were supposed to have a lot of trouble solving the abstract reasoning problems humans are naturally fantastic at, but oh well. In recent times we have seen the ARC-AGI 2 test, which once seemed a real challenge for AI models, become increasingly tractable for them. GPT-5.4 takes another bite out of that reality: its Pro version already solves 83.3% of the tasks (73.3% for the standard model), when GPT-5.2's rate was 52.9%. It is a simply brutal jump, and although in other tasks the jump is not as notable (it programs somewhat better according to SWE-Bench Pro, but not much), it is clear that we are facing an extraordinary model.

Perfect for OpenClaw? That ability seems tailor-made for OpenClaw, the AI agent that has become a phenomenon in recent weeks. OpenAI ended up signing its creator and is in some way the "owner" of the project, and this performance in agentic tasks is expected to be very useful for everything OpenClaw does, which is basically just that: managing your machine for you. That is where GPT-5.4 can really come into its own.

And you can trust it more. According to OpenAI, GPT-5.4 is now better at answering questions that require seeking information from multiple sources, "identifying the most relevant ones, particularly for 'needle in a haystack' type questions, and synthesizing them into a clear and well-reasoned answer". What's more: they rate it as their model most focused on answering based on facts, and say it is 33% less likely to state something false compared to GPT-5.2.

But be careful: it is very, very expensive. These capabilities, however, will not come cheap. With this launch OpenAI has updated its prices, making it clear that if you want the best, you will have to pay for it. The "standard" GPT-5.4 model costs $2.50 per million input tokens and $15 per million output tokens, while the Pro costs a whopping $30 and $180 respectively. Claude Opus 4.6, until now considered the best AI model, costs $10 per million input tokens and $25 per million output tokens: it was already expensive, but GPT-5.4 Pro almost makes it look like a "bargain" model (see the back-of-the-envelope comparison after this article).

Trying to stop the bleeding. The model arrives at a delicate moment. According to various sources, ChatGPT has lost 1.5 million users since OpenAI announced its agreement with the Department of Defense. That decision provoked much criticism, a "cancel ChatGPT" movement on social networks and internal tensions. There was already talk of a potential GPT-5.4 before the scandal, but the launch now clearly takes on a double meaning. It doesn't just have to be better than everyone else: it has to redeem OpenAI.

And above all, OpenAI needs a win. Public perception seems clear: the company has been suffering lately, whether from internal dramas, talent drains or temporarily falling behind in the performance of its models. GPT-5.4 is not a simple evolution of its flagship model, because what OpenAI needs is for this model to succeed and convince people to "love" ChatGPT again (figuratively, you know what we mean). We'll see if it succeeds.

In Xataka | Sam Altman says he's terrified of a world where AI companies believe themselves to be more powerful than the government. It's just what you're building
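To put the quoted prices in perspective, here is a quick back-of-the-envelope comparison in Python. The per-million-token rates are the ones cited in the article above; the sample request size (100,000 input tokens, 20,000 output tokens) is an invented assumption for illustration.

```python
# Back-of-the-envelope cost comparison using the per-million-token prices
# quoted in the article. The sample request size is an invented assumption.
PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
    "GPT-5.4": (2.50, 15.00),
    "GPT-5.4 Pro": (30.00, 180.00),
    "Claude Opus 4.6": (10.00, 25.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars of a single request at the quoted rates."""
    p_in, p_out = PRICES[model]
    return (input_tokens * p_in + output_tokens * p_out) / 1_000_000

# Hypothetical agent task: 100k tokens in, 20k tokens out.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 100_000, 20_000):.2f}")
# GPT-5.4: $0.55 / GPT-5.4 Pro: $6.60 / Claude Opus 4.6: $1.50
```

Even on this modest task, the Pro tier comes out several times more expensive than Claude Opus 4.6, which is exactly the gap the article describes.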
