AI already knew how to create images. OpenAI says it has found the missing piece with the new ChatGPT Images 2.0

Over the last few years we have seen image generators become more spectacular, faster and more popular. The problem is that a striking image is not always useful to work with: it is one thing to ask for an astronaut cat and quite another to obtain a usable marketing poster, a coherent comic panel or a graphic that respects what we asked for. That is where OpenAI now wants to move the conversation with its new model: not so much toward the pretty image as toward the useful one.

The answer. What OpenAI proposes goes in that direction. The company led by Sam Altman maintains that its new model is not built only to generate attractive images, but to solve visual assignments with more intention and less trial and error. In the presentation it went so far as to state that "images are a language, not decoration," a fairly clear way of summarizing where it wants to take the product in a market with plenty of competition. The thesis is this: asking for an image in ChatGPT should feel less like launching a creative prompt and more like commissioning a piece we can actually use.

The missing piece. If the firm wants us to talk about something more than showy images, it had to improve exactly the points where these models usually fail. Here it promises important changes on three very specific fronts: following complex instructions more precisely, organizing elements within the image better, and reproducing dense text more reliably. In other words, the goal is not only more beautiful results, but less ambiguous and more controllable ones.

Think before you draw. One of the novelties OpenAI highlights most strongly is that this is its first image model with reasoning capabilities.
In practical terms, the company says that when a "thinking" model is chosen within ChatGPT, the system can take more time, structure the task better, consult the web for up-to-date information and review its own results before delivering the image. We have tried it, asking for an image of two people walking along Gran Vía in Madrid, near Cines Callao, along with some notes on activities to do in Spain during May; the results are the ones shown in the cover image.

The keys. OpenAI talks about game prototyping, storyboards, marketing creatives, comics, social graphics and other materials where both content and form matter. To sustain that ambition, the company says it has improved on two delicate fronts: the handling of non-Latin text, with advances especially in Japanese, Korean, Chinese, Hindi and Bengali, and the more faithful reproduction of very distinctive visual styles. It also expands the available formats, with aspect ratios of up to 3:1 and 1:3, resolution of up to 2K and, in certain modes, the option of generating up to ten images within the same request, with continuity between characters and objects.

The competitive context. This announcement cannot be read as if OpenAI had suddenly discovered a new market. Midjourney has already become a clear reference for work with a strong artistic bent, Nano Banana has attracted attention for its conversational editing capabilities, and FLUX 2 has established itself in photorealism. Facing that board, the company seems to be looking for another angle. Rather than contesting each territory separately, it tries to present ChatGPT as an environment where the image is not generated in isolation but as part of a broader workflow, something that on paper can be attractive if it really delivers what it promises.
It is already starting to roll out. One of the keys to the announcement is that OpenAI says the model will not remain in the showcase phase but is already reaching the product. The company places its deployment in ChatGPT for all users, including Free and Go, and reserves the most advanced results for Plus and Pro, as Engadget has also reported. It is also bringing the model to the API and Codex, a sign that it does not want to limit it to casual use within the chat. If its strategy involves turning the image into another work tool, it made sense for the deployment to start precisely there.

Images | Xataka with ChatGPT Images 2.0 | OpenAI

In Xataka | Amazon wants to win the AI race at any price. That is why it has invested in both Anthropic and OpenAI

ChatGPT is not working and OpenAI says it is investigating the issue

If you are a ChatGPT user and tried to ask the chatbot something this afternoon, you have probably been left without an answer: time to use your brain again. The famous AI-powered service has been throwing errors for several minutes, and OpenAI, for its part, has launched an investigation to understand the origin of the problem.

The outage began around 4:00 p.m., preventing users around the world from using ChatGPT normally. As we can see in the screenshot, the chatbot refused to respond, offering error messages such as "Hmm… something seems to have gone wrong."

This is a developing story.

Images | Solen Feyissa

In Xataka | What is Cloudflare, how it works and why a crash or block causes half the Internet to fail

The US has appointed executives of Meta, Palantir and OpenAI as lieutenant colonels. We have many questions

On June 13, 2025, four executives from some of the world's largest technology companies donned the uniform of the United States Army at Myer-Henderson Barracks, a ten-minute drive from the Pentagon. After taking the oath, they were appointed lieutenant colonels of the Reserve. The appointment was controversial, but it marked the launch of Detachment 201, a very special Army body dedicated exclusively to military innovation.

Technologists in uniform, with a wink. The four new reserve lieutenant colonels are Shyam Sankar, CTO of Palantir; Andrew Bosworth, CTO of Meta; Kevin Weil, CPO of OpenAI; and Bob McGrew, advisor to Thinking Machines Labs (Mira Murati's startup) and former head of research at OpenAI. The name Detachment 201 is a wink to Silicon Valley, because an HTTP 201 status code on the web means that a resource was successfully created. All four will continue in their current positions while serving as reservists.

Sankar's thesis. Palantir's CTO had already become a reference in the discourse on applying technology to military institutions after publishing the document "The Defense Reformation" on his website 18theses.com. In it he argued that "warriors fight with weapons and with git," criticized the Department of Defense (DoD) for treating technology as "expensive and unaffordable," and proposed using AI to make military assets work more efficiently and quickly.

The germ. The project was conceived by Brynt Parmeter, who was responsible for talent management at the Pentagon. His idea was to attract technology experts so that they could take up positions in the Army when necessary. He met Sankar at a conference in early 2024 and began discussing the idea, which ended up crystallizing into a project that Donald Trump promoted.

Hand-picked appointments.
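As an aside, the HTTP 201 semantics behind the Detachment 201 name can be seen with a minimal sketch using only Python's standard library. The server and its "/units" endpoint are invented purely for illustration; the point is simply that a successful creation request is answered with status 201 ("Created").

```python
# Minimal sketch of HTTP 201 ("Created"): a POST that creates a resource
# answers with status code 201. The /units endpoint is hypothetical.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class UnitHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Consume the request body, pretend the resource was created,
        # then answer 201 Created.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        body = b'{"status": "created"}'
        self.send_response(201)  # 201 = resource successfully created
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

def create_unit(port: int) -> int:
    """POST to the demo server and return the HTTP status code."""
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/units", data=b"{}", method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), UnitHandler)  # port 0: any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(create_unit(server.server_address[1]))
    server.shutdown()
```

A GET that merely retrieves a resource would instead return 200; 201 is specifically the "we made the thing you asked for" code, which is the joke in the unit's name.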
A curiosity: among that group of chosen ones there were no Anthropic executives, even though Anthropic was the company ultimately picked in July 2025 to integrate its AI model, Claude, into Pentagon systems. Then, as we know, things changed. Wired explains how Sankar volunteered to be part of the project, and also recommended the three people who would end up forming the group with him.

What will these four executives do in the Army? Their official mission is to bring specialized knowledge in AI, software and data analysis into the Pentagon's strategy. Parmeter gave an example: the commander of the Indo-Pacific region is evaluating threats in the Far East for the next ten years and has asked Detachment 201 to explain how AI can affect security in that context. The new lieutenant colonels can also operate more tactically, advising on how soldiers can use the new tools at their disposal. In other words: they will act as consultants to the US Army, but in uniform and having taken the oath, something important because it changes their relationship with the soldiers.

The inevitable conflict of interest. The Army affirms that there is no conflict of interest because the members of Detachment 201 will not have a vote in the contracts signed with the private sector. The chronology of events suggests otherwise, however. A month before Bosworth took office, Meta announced an agreement with Anduril to develop military augmented reality products. A few months earlier, OpenAI had announced an alliance with Anduril in air defense systems. And Palantir, Sankar's company, signed a contract with the Army worth 480 million dollars in December 2024. That proves nothing, but suspicions are inevitable: even without a vote, they will be able to obtain internal knowledge and data that inevitably benefits their employers.

But weren't there going to be limits on AI in the Army?
Another thorny question raised by Detachment 201 is how the recommendations of these experts will be applied on the battlefield. OpenAI theoretically has policies prohibiting its AI models from causing harm or being used to develop weapons. However, the explicit mission of this body is to make the US Army "more lethal." That contradicts OpenAI's statements: after allying itself with the Pentagon, the company stressed again and again that its models would be used within limits… which is exactly the stance for which the Pentagon ended up wanting to turn Anthropic into a pariah.

Two weeks to get the rank. A conventional lieutenant colonel reaches that rank after fifteen to twenty years of active military career. The members of Detachment 201 received the same rank after two weeks of partially online training that included physical conditioning, a marksmanship assessment and basic notions of military protocol such as the rank structure and the use of the uniform. They did not complete basic training, and they have the flexibility to fulfill part of their 120 annual hours of service from home, something not offered to other reservists. All of this has generated criticism within the Army and comments of all kinds on social networks.

Image | DVIDS

In Xataka | Anthropic and OpenAI have developed AI. The US Pentagon is showing them who really owns it

OpenAI swore that ads on ChatGPT were its "last resort." Now they are its survival plan

A couple of years ago Sam Altman said that placing ads on ChatGPT was "the last resort for our business model." Well, ChatGPT ads are here, and OpenAI is sure they will be the business of the century, one that will generate a whopping $100 billion.

What has happened. Axios leaked it: during a presentation to investors, OpenAI confirmed its forecasts for the newly released advertising model in ChatGPT. In 2026 it expects ads to generate $2.5 billion, a figure that would grow in the following years until reaching $100 billion in 2030. This is the progression it projects:

2026: $2.5 billion
2027: $11 billion
2028: $25 billion
2029: $53 billion

Why it matters. Advertising has gone from being the last resort of its business model to simply being its business model. OpenAI is losing money at an unsustainable rate and has been making profound changes to become more profitable, such as focusing more on enterprise customers, but it may be too late. Advertising is its path to profitability. In other words, its survival depends on this going well.

Butterfly effect. If it works and OpenAI achieves its goal, it can change the rules of online advertising. $100 billion is a lot of money, enough for Google's and Meta's businesses to end up being affected. Furthermore, advertising within a chatbot like ChatGPT can be much more profitable, because the user states in a much more direct and detailed way what they are looking for; advertising on Instagram or Google Ads, by contrast, requires collecting data to guess the user's tastes. If it does not work, the outlook looks bad for the whole technology sector: we are talking about the most valuable private company in the world, and its possible bankruptcy could cause a domino effect that freezes investment and punctures the expectations placed on AI.

Users. To achieve these numbers, OpenAI estimates that it needs its weekly user base to reach 2.75 billion by 2030.
Right now ChatGPT has 900 million weekly active users; that is, it has to triple them in four years. We are talking about ChatGPT having to be at the level of WhatsApp or YouTube. There are already some 6 billion people with internet access, so the users exist; the question is whether it is feasible for OpenAI to attract almost half of them. Mass adoption of AI is already in a more mature phase and, although it is the most used, ChatGPT is no longer the industry's darling: it now coexists with equally capable competitors and, most importantly, the company's image has been eroding.

The double edge of advertising. Advertising can be tremendously lucrative for OpenAI, but it puts user trust at risk, and that is precisely what the company needs to fulfill its plans. We have normalized seeing ads everywhere, but having them appear in a conversation with a chatbot threatens to erode its main promise: to be an assistant that responds solely to the user's interest, not to the commercial priorities of those who pay to advertise. Two things can happen here: either the rest of the AI companies jump on the bandwagon and we normalize free versions with ads (the ideal scenario for OpenAI), or OpenAI is left alone and people end up moving to other ad-free chatbots. Anthropic said it would not advertise; we will see if it still maintains that in a few years.

Image | Xataka

In Xataka | Before, advertising was to monetize. Now it is to punish you, and YouTube has taken it to the extreme

The AI industry fell in love with OpenAI, but doesn't trust its CEO one bit

At OpenAI they see a future in which the work week should have four days. Not only that: every citizen should receive a share of the economic growth generated by AI. These are some of the proposals the company published yesterday with the aim of preparing us for the "age of intelligence." And on the very day it published that proposal full of good and reassuring intentions, a blow arrived for OpenAI's CEO, Sam Altman: an investigation published in The New Yorker once again called his conduct into question, harshly criticized by experts and engineers who have worked with him. Their shared conclusion: better not to trust Sam Altman.

The arrival of the age of intelligence. What OpenAI calls the "age of intelligence" will undoubtedly have a negative impact in some areas, but its document proposes changes to mitigate those problems. Among the most striking measures is the creation of a "public wealth fund" that would distribute dividends from AI directly among citizens, regardless of their employment status.

Let the machines work (and pay us for it). OpenAI also suggests taxes on automated labor to finance social security, as well as pilot projects of four-day work weeks without salary reduction. The proposal is striking and seeks, of course, to reassure citizens in the face of threats such as the job losses that mass adoption of AI can cause. The problem is that it comes at a delicate moment for an OpenAI in the midst of a reputational crisis.

Smokescreen? This optimistic proposal contrasts with the report published in The New Yorker, for which the authors interviewed more than 100 people "with first-hand knowledge of how Altman behaves in business." Among them are rivals such as Ilya Sutskever and, above all, Dario Amodei, who went on to found their own startups. Both harshly criticized Altman. Sutskever accumulated internal documents and messages showing deception and manipulation.
Amodei stated that the obstacle to AI safety is Altman himself, who relegates that area to the background in favor of the company's ambition for personal power and excessive growth. For his former partners, Altman is not a visionary but an actor with a calculated pose.

Says one thing, does another. The scandal of Altman's dismissal and later return was due precisely to that attitude: the board accused him of having "not been consistently candid in his communications." It is the same thing we have read on other occasions: Altman has a dual personality, mixing a pathological desire to be liked and accepted with a total lack of concern for the long-term consequences of his misdeeds. He tells his interlocutors what they want to hear, and then does what he really wanted from the beginning. It is something that, for example, Karen Hao recounts over and over again in her book "Empire of AI" (which, it must be said, erred in calculating the water consumption of data centers mentioned in its studies). The report also mentions how the well-known programmer Aaron Swartz met him before dying in 2013 and said of him even then that "he is a sociopath."

Public image is everything. The publication of the OpenAI document comes at a particularly critical time for a company caught in a reputational and strategic crisis. Anthropic has managed to become the darling of the AI industry (while being far from perfect itself), and OpenAI has realized that it was experimenting with too many unprofitable AI applications and now wants to refocus on what makes money. The good intentions shown in the document try to win public opinion over just as the company plans its IPO.

Learning from the past. Altman's critics point out that he is an expert at designing control mechanisms that go up in smoke.
He supports AI regulations (at least those that favor him) and publicly promotes AI ethics, alignment and safety committees that, in reality, he later undermines internally, at least according to those who have worked with him. It happened when he promised to allocate 20% of the company's computing capacity to the superalignment team and then actually granted only 1 to 2% of it. Jan Leike, who was named co-leader of that team along with Sutskever, resigned in May 2024, explaining in a thread on X that "safety culture and processes have taken a backseat to shiny products." He ended up signing with Anthropic.

Self-interested criticism. Although Altman's track record at the head of OpenAI (with the Pentagon episode as a recent example) reinforces the comments of those who criticize him, it must be remembered that competition in this industry is currently fierce. Many of those who participated in the report are direct rivals, and their criticism, veiled or not, is therefore partly self-serving, because it harms a competitor.

In Xataka | There is a new generation of AI models at the door and Anthropic has to sell them: "The biggest and smartest"

OpenAI is the most successful company on the planet. It is also the one that plans to lose $85 billion in a single year

Something special is going to happen in 2026: both OpenAI and Anthropic are going public. That will finally mean that individual investors can put their money into them and bet on their future. It will be the definitive test of the credibility of companies that have grown exceptionally in recent years but have also burned money as if there were no tomorrow. And be warned, because there is a compelling reality here: they are going to keep burning it in an even more astonishing way.

The two sides of the IPO. The Wall Street Journal has had access to the financial documents submitted to investors ahead of the IPOs proposed by both OpenAI and Anthropic. They reveal extraordinarily striking data, with two sides to it.

Amazement and concern with OpenAI. OpenAI has indicated that it will almost double its revenue this year. According to its forecasts, it could become profitable in 2026 if one excludes the cost of training its models (which is stratospheric, of course). But there is the other reality: OpenAI expects to spend $121 billion on computing power in 2028, so even doubling its revenue it will lose, no less, $85 billion. No company has ever lost this amount of money and survived, but OpenAI not only promises that it will survive, but that those losses will end up looking almost anecdotal.

I tell you the truth, but only part of it. Both companies wanted to show two different versions of reality when presenting their profitability: one in which the very expensive model training processes are included, and another in which those costs are excluded under a heading called "computing for research." Excluding those costs, OpenAI is on track to achieve a small pre-tax operating profit this year, and Anthropic also promises to achieve it if its most optimistic scenario comes true. Excluding the cost of training models, both OpenAI and Anthropic could be "profitable" this year. Source: WSJ.

Until 2030, no real profitability.
If the costs of and investment in model training are included, OpenAI indicates that it will become profitable in 2030, something it had already planned long ago and that cannot hide a blunt reality: the company has not only not stopped spending money so far, it is going to keep spending even more, with projects like Stargate leading the way. Saying that in 2026 they will be profitable if we exclude training costs is like an airline telling us it is profitable excluding the cost of fuel. Anthropic, by the way, expects to be fully profitable in 2028.

Revenues growing fast, costs even faster. In addition to those training processes, both OpenAI and Anthropic are spending billions of dollars every year on inference, an area that is becoming even more important at an operational and strategic level. Currently, inference costs represent half of each company's revenue, although inference is expected to become cheaper and those costs to fall with it. Here, however, there are two big differences between the companies. OpenAI: most ChatGPT users do not pay to use the service, so OpenAI absorbs their inference costs without monetizing them; according to the company, this eases adoption and will allow users to become subscribers in the future, something that is not happening much at the moment. Anthropic: this startup has managed to win over many companies that pay to use its models, and it is evident that the company is absolutely focused on making you pay for its models if you want to use them. And if not, just ask OpenClaw.

Betting on the future. The companies and venture capital funds that have invested billions in OpenAI or Anthropic have made a bet on the future. They have blind faith that these companies will end up taking over the world, so the fact that they are still not profitable today does not scare them… or at least not enough to withdraw from this expensive race.
Both have experienced spectacular growth that serves as an argument for investors. In addition, companies' growing interest in paying to integrate AI solutions has boosted Anthropic and even led OpenAI to reorganize and change its strategy: fewer fireworks and hype, more focus on what makes money.

The IPO as a survival trick. Both companies are going to keep burning money as if there were no tomorrow in the coming years, but now they hope investors will be the ones sustaining their businesses. The amount of money they will need has led even the Nasdaq to make things easier: it will allow newly listed companies to join its renowned index more quickly, giving them access to larger pools of capital. Now it will be the public market, and to a large extent the individual investor, who decides whether to bet on that future or not.

A quick poll. Would you invest in OpenAI or Anthropic if they went public? It is evident that the two companies make different impressions, and although their strategies and ways of doing things differ, it is clear that this public offering is going to be very striking when it happens. So it is a good time to find out a little about what you, the xatakeros, think about this financial move.

Image | TechCrunch | Wikimedia Commons

In Xataka | NVIDIA has so much money that it is becoming something different: the largest startup incubator in the world

Each new AI model is the best ever until the next one arrives. Anthropic and OpenAI have turned that into a business

It doesn't matter what technological product we are talking about, because both the product and how it is sold to you matter. And here, making promises and generating expectations is the classic strategy. The next processor is going to be more powerful, the next smartphone is going to take better photos… and of course, the next AI model is going to be (much) better. We see that message constantly in the AI segment, but now it is going further.

Anthropic and a curious leak. A few days ago, a group of security researchers detected 3,000 unpublished documents in an accessible Anthropic database. They included a draft of the blog post for the theoretical launch of its next AI model. The striking thing is not so much the leak itself (intentional or not) as what those documents reveal.

Mythos goes beyond mere evolution. Or at least that is what the leaked draft seems to show. It describes a model called Claude Mythos (also referred to as Capybara) that would not be a simple improvement on Claude Opus but a level above it. The document says the model is "bigger and smarter than our Opus models, which until now were the most powerful."

Anthropic embraces the hype. According to the leak, the benchmark scores would be notably higher than those of Opus 4.6 in programming, reasoning and cybersecurity. Anthropic has ended up confirming the existence of this development, describing it as "a step change" and "the most capable model we have created to date." It is not too surprising a phrase, because it is basically the same thing they have said about every new model they have released.

And even they are scared. In fact, what is surprising in the draft is not the message that it is better, but the warnings that accompany that future presentation.
Anthropic describes Mythos as "currently far ahead of any other AI model in cybersecurity capabilities." In fact, it warns that this may be the beginning of "an imminent wave of models that can exploit vulnerabilities in ways that far exceed the efforts of the defenders." In other words: Mythos could be an extraordinary tool for cyber attackers. The actual launch plan is to offer Mythos first to cybersecurity organizations so they can prepare. We will see whether that provides an advantage, assuming Mythos meets expectations.

OpenAI also makes a move. Anthropic and OpenAI have been moving in parallel for some time, and they have done so again. OpenAI is preparing a new AI model, codenamed "Spud" ("potato"). Hardly anything is known about it beyond the fact that its pre-training phase has been completed. More relevant is that this model appears just as OpenAI has decided to be less OpenAI and more Anthropic: it has abandoned Sora and is redirecting resources to regain ground where it is losing it, that is, in the enterprise.

But the quotas are not infinite. These days, users of Claude's $100 and $200 per month plans began to notice that they were using up their limits and token quotas in less than an hour of their workday. What is happening is that Anthropic is training models that are more powerful but much more expensive to run, which makes them difficult to serve. Demand is growing faster than the efficiency improvements that are arriving, so, according to some analysts, AI companies are tightening those quotas and, in a sense, making their models behave as if they were "dumber" in order to save money. It is something we have seen in the past.

Hedonic adaptation. Psychologists use the term hedonic adaptation for the phenomenon by which humans quickly become accustomed to any level of experience, good or bad, and return to their baseline emotional state.
Applied to AI, this phenomenon explains why the model that seemed miraculous to us six months ago now feels slow and limited, and why what seemed like science fiction six months ago is today the minimum we demand of these companies. Anthropic and OpenAI did not invent the concept, but they have integrated it into their roadmaps like other technology companies before them. As we said: they sell not only what they have today but, more importantly, what they will have tomorrow.

Mythos will be brutal, and very expensive. Anthropic's draft warns that Mythos will be "very expensive to serve and will be very expensive for our customers." That points to two possibilities. The first is that only users of the Max plans will get access to some queries with this model. The second, that an even more expensive subscription than the $200-a-month one will appear so that we can use Mythos with more leeway. We already had free AI, basic paid AI and high-end paid AI. Now we will also have super-high-end AI.

In Xataka | The hard landing of OpenAI: after years at the forefront, it is discovering that AI is not won only with memes and hype

OpenAI had to choose between "being the company that has erotic AI" and competing with Anthropic. It has chosen the obvious one

Sam Altman wasn't afraid to try things. People want to create Studio Ghibli-style images? Go ahead. Hyperrealistic AI videos? Go for it. An AI browser? Done. Wherever it saw an option to add AI, OpenAI added it. But that was before, because those projects are now being put on the back burner or shut down outright for a simple reason: they are firing blanks.

ChatGPT is not going to flirt with you. According to the Financial Times, OpenAI has canceled its plans to launch an erotic chatbot, and the goal now is to focus its resources on its most important products. The decision is partly a response to tensions and internal criticism from employees and investors over offering sexualized AI content. One former employee noted that "AI shouldn't replace your friends or family; you should have human connections."

Making an erotic chatbot is not that easy. Beyond the social impact, OpenAI appears to have faced genuinely complex technical challenges in building this type of chatbot. Training an AI model to do something that "normal" models try to avoid was causing problems: for example, when including data sets with explicit content, it was necessary to eliminate illegal behavior such as bestiality or incest. That adult mode, internally called "Citron mode," could have required users to prove they were over 18.

Too much risk. Launching an "adult mode" for ChatGPT was reputationally risky, and people familiar with the decision have indicated that OpenAI wants to begin long-term research into the effects of explicitly sexual chats and the emotional bonds this type of interaction can create in users. They point out that for now there is no "empirical evidence" about the impact, but the company's position is clear. And yet there is another big reason to shelve it indefinitely.

Let's focus on what makes money.
In recent weeks we have seen how the new darling of the AI world is Anthropic, which with Claude has managed to conquer precisely the market segment that is beginning to generate decent income for AI: the enterprise. OpenAI had been especially focused on end users, but the steps it has taken to convince us to pay for ChatGPT Plus or Pro are not quite working.

No ads, no shopping. A few months ago OpenAI announced that ChatGPT was already capable of buying things for you with its Instant Checkout. The feature was genuinely promising and proposed a paradigm shift in the rules of traditional e-commerce, but the launch seems to have had much less impact than expected. The decision to place ads in conversations does not look likely to make ChatGPT's revenue skyrocket either, so the solution is becoming clearer: if we have to be like Anthropic, we will be more Anthropic, we imagine Sam Altman saying.

Goodbye Sora… The ads are not quite working, neither is Instant Checkout, and many other launches have not gone beyond generating fleeting expectations. It happened with Sora: that OpenAI abandoned it is a telling sign that the company prefers to recalibrate completely.

…hello superapp. Another sign of this reorganization is that OpenAI is preparing a desktop tool that will unify its chatbot, its coding platform (Codex) and the Atlas browser. The objective: to create a super app with agentic capabilities, oriented not only to code but also to productivity. It is not clear whether it will be launched as a solution for end users or whether the destination will be the enterprise, where Anthropic is winning the game.

New "Spud" model in sight. The Information reported this week that OpenAI had recently completed development of a new AI model called Spud.
OpenAI is expected to launch it in the coming weeks, and Altman reportedly told his employees that the model “can really accelerate the economy.” It is not clear what that refers to (agentic capabilities?), but with it OpenAI may be able to regain some of the ground lost to Anthropic. If Anthropic lets it, which we doubt.

Image | Universal Pictures

In Xataka | Wikipedia has banned using AI to write or rewrite articles in English. Human knowledge begins to raise barriers

OpenAI promised them happiness by selling hype and memes. Then reality hit

The news of the weekend is Sora’s closure. What was once the hype platform for video creation says goodbye, leaving behind multimillion-dollar agreements with giants like Disney, OpenAI’s promise to be one of the big players in text-to-video, and doubts about the company’s strategy.

The bet on hype. For some time now, OpenAI’s strategy has been to create hype, dominate the conversation, and wait for users to assimilate its proposal. The problem? It is a strategy that worked in its initial phases, when OpenAI was playing practically alone. We saw it with Sora: the launch was the most talked-about topic on social networks, television and practically every media outlet. Months after launch, there was no way to use the app outside the United States without a VPN (and only in a very controlled way through its app in countries such as Canada, Japan, Korea or Vietnam), and it was still in an experimental phase.

The closure. Sora has barely lasted two years. It was born in February 2024 and says goodbye in March 2026. What was born as the reference model for video creation remained a half-baked experiment, while Chinese giants, or Google itself with its Veo models, advanced and landed their models on the plane that really matters: the one the average user can access.

The competition tightens. OpenAI promised them happiness two years ago, when ChatGPT had hardly any rivals and companies like Anthropic were in their early product stages. But the picture has changed in just a few months: Claude is becoming, with almost daily iterations, the most complete chatbot (it is already much more than that). Gemini has been eating OpenAI’s lunch for a year. China is absolutely unleashed, launching spectacular video models like Seedance 2.0. AI solutions are no longer promises and hype: they are rapid, controlled launches, integrated into platforms that any average user can access. If you don’t integrate, you don’t win.
Seedance 2.0 has been running for less than three months and is already being integrated into editing programs such as CapCut. AIs like Kling AI have been integrated into huge platforms like Higgsfield AI for months. These are releases that materialize a few days after seeing the light, and that lay tangible foundations for the state of text-to-video AI. OpenAI assumed that a minority of professionals would be willing to pay for the more expensive GPT tiers to access Sora. The reality: the competition is managing to create far superior mass-market tools, and OpenAI cannot afford tools like Sora.

The money is on the other side. Sam Altman needs to redefine the strategy. For the moment, he wants to double the company’s workforce, center everything on one superapp that trims the catalog, and he has his eyes on Spud. That is the internal name of the next great AI model they are preparing, one aimed at finally making OpenAI a profitable company. After years without a fixed direction, and with its rivals eating its lunch, OpenAI faces its most complex stage: one in which selling hype is not enough.

In Xataka | Sora’s closure is a sign: OpenAI takes a step back in the AI race to completely recalibrate

Sam Altman’s new role: making sure that OpenAI does not run out of funding

OpenAI’s strategy until now had been to fire into the air and hope that, with luck, a bullet would hit the target. The company has finally realized that this was not the way forward, and for a few days now there have been signs that it is beginning to define its priorities once and for all. It plans to double its workforce before the end of the year, it wants to launch a super app to simplify its catalog, and it has even shut down Sora 2. The changes are profound and also affect the CEO himself. What is Sam Altman’s role in this new OpenAI?

Raise money. The Information reports that Sam Altman has changed his role within the company. Until now, the CEO directly supervised the safety and security teams, but from now on he will focus on securing more investment, managing supply chains and building data centers “on an unprecedented scale.”

Why it is important. This change suggests two things: on the one hand, that Altman has distanced himself from strategic issues to become more involved in technical or secondary matters; on the other, that the situation inside OpenAI is serious enough to move him into a role focused on fundraising. As a consequence of Sora’s closure, OpenAI has lost the agreement it signed with Disney, worth one billion dollars. On top of that, NVIDIA itself recently got off the wagon with its 100 billion dollars. The situation is, to say the least, delicate.

Saving mode. OpenAI’s strategic pivot seeks to save both money and computing resources. Sora’s closure has a lot to do with the latter, since the app consumed a lot of resources and had only been launched in the United States. The team that was dedicated to its development will now dedicate itself to robotics-oriented world simulation. Additionally, the applications division led by Fidji Simo is now called “AGI deployment” and will primarily focus on commercialization and real-world usage.

Spud. That is the internal name of the company’s next big AI model.
According to The Information, the pre-training phase has already concluded, and the model is expected to launch in the coming weeks. It is unclear what capabilities it will have, but Sam Altman has told employees that it “can really boost the economy.” Once again, this confirms that the strategic shift points toward the profitability the company is after.

AI as a consumer product. Throughout 2025, OpenAI launched many very different products on top of those it already had, which were not few. With Sora 2 it wanted to be a social network, with ChatGPT Atlas a browser, there were plans for an adult mode in ChatGPT… Until now, OpenAI’s bet has been to turn AI into a mass consumer product, but it has discovered that going viral is not the same as making money, and that having so many eggs in so many baskets is not profitable.

AI as a business product. While OpenAI was searching for its identity without a fixed direction, another company had a very clear one: Anthropic. The startup focused primarily on business clients, those who have fewer qualms about paying subscriptions of hundreds of dollars a month, and little by little it has been overtaking OpenAI. The figures do not lie: two years ago OpenAI had a 50% enterprise market share and today it has 25%, while Anthropic already has 32%.

Image | Xataka with Freepik
