OpenAI, Broadcom and TSMC team up on the 'XPUs': chips designed to end OpenAI's dependence on Nvidia

OpenAI wants to pound the table in the race to dominate artificial intelligence. At the beginning of September we learned that "a mystery customer" had placed a $10 billion order with Broadcom for the design of AI chips. Through sources close to the company, we learned it was OpenAI, which intends to form a strategic alliance with both Broadcom and TSMC to manufacture its own chips. Called 'XPUs', they have a clear objective: to stop depending on Nvidia, something many big tech companies want. Independence in AI comes at a steep price. For OpenAI, depending completely on Nvidia to feed ChatGPT and its future models is a capital risk, both because of the astronomical costs involved and because it ties the company to the technological roadmap and stock levels of a single supplier. Designing its own chips lets OpenAI optimize the hardware specifically for its language models, reduce operating expenses and free itself from the bonds of a single vendor. It is a strategy along the lines of Apple's, when it said goodbye to Intel to start making its own chips for the Mac. Although stopping depending on Nvidia today is complicated. The golden trio behind the XPUs. Broadcom leads the design of these specialized processors, contributing its experience in custom chips for tech giants. The company controls approximately 70% of the custom AI processor market and has already collaborated with Google on its TPUs for years. TSMC, the world's largest semiconductor manufacturer, will be responsible for producing the chips using its advanced 3-nanometer technology. Mass production is expected to start in 2026, with the first shipments arriving that same year. What exactly are the XPUs. These chips are not GPUs adapted for AI, which is broadly what Nvidia markets, but architectures designed from scratch with a single purpose: to accelerate language models.
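That single purpose is worth making concrete: the core workload of a language model is dense matrix multiplication, and accelerators built for it (Google's TPUs are the best-known example) typically implement it as a systolic array, a grid of multiply-accumulate units that operands flow through in lockstep. The toy cycle-by-cycle simulation below is an illustrative sketch of that structure, not a description of the actual XPU design, which has not been published:

```python
# Toy simulation of an output-stationary systolic array: a grid of
# multiply-accumulate (MAC) cells where cell (i, j) accumulates C[i][j]
# while operands flow through the grid one step per cycle.
N = 3
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[9, 8, 7], [6, 5, 4], [3, 2, 1]]

C = [[0] * N for _ in range(N)]
# With input skewing, cell (i, j) sees the pair A[i][k], B[k][j] at cycle i + j + k.
for cycle in range(3 * N - 2):
    for i in range(N):
        for j in range(N):
            k = cycle - i - j                  # which operand pair arrives this cycle
            if 0 <= k < N:
                C[i][j] += A[i][k] * B[k][j]   # one MAC per cell per cycle

# The result matches an ordinary matrix product.
expected = [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)] for i in range(N)]
assert C == expected
print(C)  # [[30, 24, 18], [84, 69, 54], [138, 114, 90]]
```

Each cell does one multiply-accumulate per cycle and passes operands to its neighbors, so an N×N grid finishes an N×N product in roughly 3N cycles instead of N³ sequential steps. That is the kind of structural advantage a chip designed "from scratch" for one workload can bake into silicon.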
While Nvidia's GPUs were born to render graphics and were later adapted to artificial intelligence, OpenAI's XPUs are conceived specifically for AI training and inference tasks. They incorporate systolic arrays, high-bandwidth memory (HBM) and integrated networking capabilities, similar to Nvidia's most advanced processors but optimized for OpenAI's specific needs. The rebellion against Nvidia has several fronts. OpenAI is not alone in its effort to break Nvidia's hegemony. Google has been developing its TPUs (Tensor Processing Units), now in their seventh generation. Amazon created its Trainium and Inferentia chips for AWS. Microsoft designed Maia for Azure. Meta collaborates with AMD on alternative solutions. Even from China, Huawei threatens to challenge Nvidia head-on over the next three years. It is clear that all the big tech companies want greater control over their products, and to get it they must go down into the chip supply chain and start rethinking the scheme. Nvidia's real moat. However the landscape looks, Nvidia preserves an ace up its sleeve that will not be easy to overcome: CUDA. This development platform has been the industry standard for more than a decade. Virtually all AI researchers and developers program in CUDA, which generates a very powerful network effect. Switching to a different architecture implies not only acquiring new chips, but rewriting software, retraining teams and, in many cases, starting from scratch. This technology is what keeps Nvidia in a privileged position even in the face of the avalanche of competitors. A dual strategy. The paradox of the case is that OpenAI is not abandoning Nvidia. In parallel to this own-chip project, the company maintains a $100 billion agreement with the Santa Clara giant for its Stargate project, which plans to build huge data centers through 2028.
On the one hand, the supply of computing power from Nvidia is assured, while on the other they develop their long-term alternative. The idea is that the XPUs will initially be used internally, especially for inference (applying already-trained models), while Nvidia GPUs will still be needed to train the most demanding models. The board reshuffles. Broadcom clearly comes out of this move reinforced. The announcement of its agreement with OpenAI sent its shares soaring and consolidates its position as the partner of choice for companies seeking custom chips. TSMC also wins: each new wave of specialized chips reinforces its indispensable role as the world's reference manufacturer. Nvidia, on the other hand, saw its share price dip slightly after the news broke, although it still holds a dominant position that will be very hard to topple. In addition, AMD could benefit if smaller companies, unable to develop their own chips, look for alternatives more accessible than Nvidia. Cover image | Nvidia, Vecteezy, Village Global. In Xataka | The AI companies tell us they want to achieve AGI. What they are really conquering is the attention economy

OpenAI signs with Samsung and SK Hynix for a potential chip demand of 900,000 wafers per month. It is an absurd figure

In Seoul, a package of agreements was closed that reflects how far the race for artificial intelligence is going. OpenAI sat down with Samsung and SK Hynix to advance its Stargate project, and the companies pointed to a goal that is startling on its own: 900,000 DRAM wafers per month. The plan, according to the parties, involves reinforcing memory production and studying new data centers in South Korea. All this was announced after a series of meetings between Sam Altman, business leaders and President Lee Jae-myung himself. The meeting at the Seoul presidential office brought Sam Altman together with the leaders of the aforementioned Asian conglomerates, in the presence of President Lee Jae-myung. The tone was shared: Korea seeks to consolidate itself as one of the three global powers in artificial intelligence, and OpenAI needs to anchor its Stargate project in regions with technological muscle. That fit explains both parties' interest in formalizing agreements that range from memory supply to the construction of new data centers, with a long-term view. An objective that could strain the entire memory sector. The volume that has been put on the table is disproportionate compared to the market. According to TechInsights, global production capacity for 300 mm DRAM wafers was about 2.07 million per month in 2024 and is expected to grow to 2.25 million in 2025. Reaching 900,000 would mean about 39% of all that capacity. No individual manufacturer reaches such a figure alone, so the magnitude of the agreement reflects both OpenAI's ambition and the growing pressure to secure the supply of advanced memory. The signed documents include preliminary commitments to expand memory production and evaluate additional infrastructure in South Korea. Among them is the participation of Samsung SDS in the development of data centers, as well as Samsung C&T and Samsung Heavy Industries in their design and construction.
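The share is easy to verify from the TechInsights capacity figures quoted above. Depending on which year's capacity is used as the baseline, the 900,000-wafer target comes out at roughly 40-43% of global output (the article rounds the share of projected capacity to about 39%):

```python
# Back-of-the-envelope check of the wafer-share figure cited above.
# Capacity numbers are the TechInsights estimates quoted in the article.
DEMAND_WAFERS_PER_MONTH = 900_000          # OpenAI / Samsung / SK Hynix target
CAPACITY_2024 = 2_070_000                  # global 300 mm DRAM capacity, 2024
CAPACITY_2025 = 2_250_000                  # projected capacity, 2025

share_2024 = DEMAND_WAFERS_PER_MONTH / CAPACITY_2024
share_2025 = DEMAND_WAFERS_PER_MONTH / CAPACITY_2025

print(f"Share of 2024 capacity: {share_2024:.1%}")  # 43.5%
print(f"Share of 2025 capacity: {share_2025:.1%}")  # 40.0%
```

Either way, the conclusion in the text holds: no single manufacturer produces anywhere near that volume on its own.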
The Ministry of Science and ICT is considering evaluating sites outside the Seoul Metropolitan Area, and SK Telecom has signed an agreement to study the viability of a center in the southwest of the country. It is also proposed to explore the deployment of ChatGPT Enterprise and API capabilities in corporate operations. A key point in all this lies in the difference between using and training a model. When someone consults a chatbot, inference infrastructure is activated, which is far less demanding. But to train a new-generation system, thousands of chips are needed working in parallel, each accompanied by high-performance memory modules. That scale multiplies the need for servers, cooling systems and electrical power. In that context, guaranteeing hundreds of thousands of wafers per month does not look like an excess, but a way of ensuring that the next wave of models has the material support it needs. Stargate data centers in the United States. OpenAI's computing muscle relies on alliances of enormous scope. With Oracle and SoftBank, the company is preparing five data centers that would provide several gigawatts of capacity. Nvidia, meanwhile, has announced that it would invest up to $100 billion and would give access to more than 10 gigawatts through its training systems. OpenAI's trajectory cannot be understood without Microsoft, its first great partner. The initial bet of $1 billion in 2019 and the subsequent $10 billion investment gave access to the Azure cloud, key to training the models that powered ChatGPT. Over time, however, Sam Altman's company has begun to reduce that dependence. The latest moves mark a change of course toward infrastructure over which OpenAI has more direct control, a way of making sure it is not conditioned by a single supplier. It should be remembered that many of the announcements remain preliminary. Letters of intent and memoranda mark the will to move forward, but concrete details have not yet been closed.
At the scale Stargate proposes, the risks are evident: from bottlenecks in the production of high-performance memory to the energy availability needed to feed facilities of several gigawatts. To this are added the necessary permits and the complexity of coordinating projects with so many actors. For now, what has been signed opens a path, but it remains to be seen what materializes and on what timelines. Images | Sam Altman | Samsung | SK Hynix | Xataka with Grok. In Xataka | I've been hooked on Sora 2 for two days: I'm generating absurd memes where I am the protagonist and I can't stop

OpenAI has just presented Sora 2 with its own TikTok-style app. A new wave of viral videos is taking shape

You open your phone and, in a few seconds, you see yourself inside a scene that did not exist a minute ago: you are the protagonist of a clip that looks genuinely shot, with movements and bounces that feel coherent. The mechanics hold no mystery: you record a brief take to capture your voice and face, and the app "places" you on stage. The striking thing is the feeling of control: you can chain shots together and maintain the state of the world without anything falling apart. That is where the game of Sora 2 begins. To understand what Sora 2 means, look back. The road opened in 2022, when ChatGPT placed text generation at the center of the technological debate. That impulse gave way to image models and, in February 2024, to Sora's first version, a prototype that already showed object permanence and some visual coherence. The competition soon reacted: Runway with its Gen-4 and other projects set the pace of a market in full boil that now receives a new chapter. What's new in Sora 2. OpenAI describes this model as a leap comparable to what GPT-3.5 was for text in its day. The big difference is in physical fidelity: if a ball misses the hoop, it bounces off the backboard instead of teleporting, a common error in previous systems. It can also maintain the state of a scene between shots and follow more complex instructions. In addition to video, it generates voices, effects and soundscapes, which makes it a much more complete video-and-audio tool. The app: cameos, remixes and a feed. Alongside the model comes an application designed to use it with a social slant. After a quick face and voice verification, users can insert themselves as a cameo in any scene and share the result with their contacts. The app lets you remix other people's videos and browse a feed that adjusts to your preferences through natural language. OpenAI says the experience is designed to encourage joint creation, not passive consumption, and that it is opening invite-only on iOS.
Wellbeing and safety: controls and limits. The company led by Sam Altman insists that its application does not seek to trap the user in an infinite scroll. It is designed to prioritize people you know and to encourage creation more than consumption. Base restrictions apply for teenagers, such as a limited number of generations and stricter cameo permissions. To this are added parental controls from ChatGPT and the ability to revoke any use of your image. There will also be human moderators for harassment cases. Is it free? As we said, the Sora 2 app is rolling out gradually on iOS, with an initial deployment in the United States and Canada. OpenAI promises to extend it to more countries, although for now it is not available in Spain. Access requires an invitation and can also be requested from the Sora.com website. The service starts free of charge, with usage limits that depend on "computing capacity." In addition, ChatGPT Pro subscribers (the $200-a-month tier) have access to a higher-quality version of Sora 2, and the company plans to enable the model via API. Some failures remain. Among the examples circulating are anime fights, explorers shouting under an avalanche, and acrobatics that respect rigidity and buoyancy. The model manages to give the impression that the failures belong to the character, not to the algorithm. It also lets you insert a real person into a scene with a recognizable voice and appearance. Even so, OpenAI admits that errors abound and that overall coherence is far from perfect, although it considers the progress over Sora 1 evident. Sora 2 raises a scenario in which videos stop being just a consumption product and become a shared play space. The key is in that ability to turn anyone into the protagonist of a clip ready to circulate. It may be a short-lived fad, or it may inaugurate a more participatory stage of creativity.
What seems clear is that OpenAI has put the focus back on the everyday user, with a proposal designed to trigger waves of viral content. Images | OpenAI. In Xataka | ChatGPT is now able to buy things for you: OpenAI has just launched a missile at Amazon's waterline

OpenAI has just launched a missile at Amazon's waterline

Until now, when you wanted to buy a product online, you searched for it on Google, on Amazon, or directly in the store that sells it. OpenAI wants to radically change that shopping experience so you don't have to do any of that. It will be enough to ask ChatGPT to find the product... and buy it. And that can change many things. What happened. OpenAI has announced the creation of an option called "Instant Checkout" that will let users purchase individual products directly through e-commerce platforms. Etsy and Shopify, the first. The feature will initially be available for purchases in Etsy stores in the US, and soon for more than a million sellers who use the Shopify platform. Users of ChatGPT's free, Plus and Pro plans can take advantage of it, and OpenAI hopes to expand the program to more platforms and other countries in the future. A direct attack on Amazon. The new option suggests a potential change in the way we buy things. Amazon has become that "everything store" we turn to for all kinds of purchases. But here it faces an existential threat, because AI promises to help us shop by selecting the best products and prices without our having to do practically anything (except confirm the purchase). The alliance with Shopify, an increasingly widespread platform, is another clear blow to Amazon's business model, but the threat is triple: not only can the search, and the purchase, now happen outside Amazon... Affiliates. Every time you buy something with Instant Checkout in ChatGPT, OpenAI takes a commission. The price does not change for the buyer, but the seller (in this case, Etsy or Shopify) pays OpenAI a commission for convincing the user to make that purchase in that store. The same goes for Google, whose search engine has also fed on commissions by recommending products in its sponsored results. Conflict of interest.
With this kind of option, the chatbot is no longer just an assistant that helps you resolve doubts: it also becomes an interested recommender of products. It is not clear how ChatGPT chooses the products it recommends, but it seems logical to think there will be a system similar to Google's or Amazon's in which stores can position their products to favor the recommendations. For now, OpenAI says the recommended products come from an organic, unsponsored search, "ranked solely on relevance to the user." The process, once again, could become distorted, and the winners would be those who invest the most in marketing and positioning to make their products "more visible." Stock markets. The reaction on the US markets makes the reception of this feature clear: Etsy's shares rose 16%, while Shopify's (which does not yet have the option, though it soon will) rose 6%. For both platforms this option represents a potential boost to sales and revenue, and AI can become their best ally in reducing users' traditional dependence on Google or Amazon to choose which products to buy, and to buy them. Agentic commerce protocol. OpenAI also stressed that, to offer this feature, it has developed its Agentic Commerce Protocol, a component that will allow more stores and developers to create new integrations. The protocol has been developed in collaboration with Stripe and is available under an open-source license (Apache 2.0) with its code on GitHub. Can you trust ChatGPT to buy? Considering that chatbots continue to make mistakes and hallucinate, entrusting purchases to an AI model can raise doubts. Will it get the purchase process right? Will it order what you actually want? Here, admittedly, the answer is much more deterministic, because ChatGPT has a defined catalog of products to search through.
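Conceptually, the flow described here involves three pieces: a merchant catalog, an order object, and an explicit user confirmation gate before any payment is delegated. The sketch below illustrates that shape only; every class and field name is invented for illustration and is NOT the actual Agentic Commerce Protocol schema, which is published openly on GitHub:

```python
# Illustrative sketch of an agent-driven checkout flow. All names here are
# invented; this is not the real Agentic Commerce Protocol schema.
from dataclasses import dataclass

@dataclass
class Product:
    merchant: str
    sku: str
    name: str
    price_usd: float

@dataclass
class Order:
    product: Product
    quantity: int
    confirmed_by_user: bool = False  # the human must explicitly approve

    def total(self) -> float:
        return round(self.product.price_usd * self.quantity, 2)

def checkout(order: Order) -> str:
    # Payment is only delegated once the user has tapped "buy".
    if not order.confirmed_by_user:
        raise PermissionError("user confirmation required before payment")
    return f"paid {order.total():.2f} USD to {order.product.merchant}"

mug = Product(merchant="example-etsy-shop", sku="MUG-01", name="Ceramic mug", price_usd=18.5)
order = Order(product=mug, quantity=2)
order.confirmed_by_user = True  # the user taps "buy" in the chat UI
print(checkout(order))  # paid 37.00 USD to example-etsy-shop
```

The design point the article makes is captured by the confirmation gate: the agent can search and assemble the order, but the purchase cannot complete without the user's explicit action.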
In addition, the AI model depends entirely on the user's decision and action: it is the user who must tap "buy", confirm the order and its details, and complete that purchase integrated into the chatbot session. Here ChatGPT is a shopping assistant that tries to simplify the process, but of course, in something as sensitive as purchases (with our money at stake) OpenAI can run into serious problems if the process ends up failing. In Xataka | Sending this $320 Meta device from Japan to Spain costs $29. Sending it to the US costs $2,000, and it is not a typo

OpenAI wants to bring in as much revenue as Microsoft within five years. This is how it intends to do it

OpenAI projects revenue of around $200 billion by 2030. That is almost what Microsoft invoices today: $245 billion. A company that will approach $12 billion this year believes it will multiply its revenue many times over in less than five years. To put the excess in context: Apple took four decades to reach those figures. Google, two. OpenAI intends to do it with a decade and a half of existence, with one nuance: until three years ago it did not invoice even a hundred million. Its "moment zero" was 2022. The projected growth chart, published by The Information, has many layers, and outsized ambition is just one of them. Revenue shoots up exponentially, but computing costs (both training and inference) grow almost linearly. That equation only works if OpenAI stops being what it is today: a company that sells access to LLMs for $20 a month. It needs to be something that goes much further. The question is not whether they can multiply their revenue by 17, but what they have to invent to justify such a valuation. The secret is in the agents. But not the ones we imagine. OpenAI does not aspire to sell you a smarter ChatGPT. It aspires to replace entire departments. Deep Research already hints at the model: charge not per query but per work delivered. If a report that used to take three junior analysts a week is now produced by an agent in a few minutes, supervised by a single employee, how much is it worth? Not the $20 of a subscription. It is worth the $50,000 those salaries cost. Multiply that by every department of every Fortune 500 company... Suddenly, the $200 billion does not look like science fiction. It looks conservative. But here comes OpenAI's existential paradox: to capture that value it needs its models to be irreplaceable, unique, unattainable. Yet with every month that passes, the gap with Claude, Gemini or DeepSeek narrows. The commoditization of AI is not a future threat: it is already happening.
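A quick calculation makes the scale of that bet concrete, using only the revenue figures quoted above (a sketch, not a forecast):

```python
# Implied growth from ~$12B (this year) to ~$200B (2030), per the figures above.
current_revenue_b = 12      # billions of dollars, projected for this year
target_revenue_b = 200      # billions of dollars, OpenAI's 2030 projection
years = 5

multiple = target_revenue_b / current_revenue_b
cagr = multiple ** (1 / years) - 1   # compound annual growth rate

print(f"Revenue multiple: {multiple:.1f}x")           # 16.7x
print(f"Implied annual growth: {cagr:.1%} per year")  # 75.5%
```

In other words, the plan requires growing revenue by roughly 75% every single year for five years straight, which is why the article calls the chart "many-layered."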
How do you justify monopoly prices when your product is becoming water or electricity? OpenAI's answer seems to be speed: arrive first, dominate the market, create dependence before others can react. It is the old strategy of companies like Uber or Amazon: lose money to buy market share, praying that when profitability arrives, you are the only one left standing. Plan B lies in vertical applications. They will not sell generic solutions but specific ones: your company's complete customer-service system, your university's educational platform, your firm's legal copilot. Each vertical, a new multi-billion-dollar market. This is where the numbers begin to make sense. Microsoft 365 generates almost $100 billion a year for Microsoft. The global enterprise software market is approaching a trillion dollars. If OpenAI captures just 20% of it by replacing traditional software with intelligent agents, it reaches its goal. You don't need to invent anything new. You just need to make everything that exists obsolete. OpenAI's real bet is not so much technological as temporal. They are buying time, with $350 billion in computing costs, betting that AGI ("or something similar enough": there is a reason Altman has been moving the goalposts for a while) arrives before the money runs out. If they pull it off, those $200 billion will be an anecdote. If they fail, we will have witnessed the most spectacular bubble in the history of technology. And the fascinating thing is not that OpenAI is trying. It is that everyone who matters (Microsoft, Oracle, SoftBank, the US government) seems to believe they can achieve it. Featured image | Adolfo Félix. In Xataka | The alliance between Oracle and OpenAI is not only about data centers: it is about overtaking Google, Apple and Microsoft on the right

OpenAI has just changed the rules of ChatGPT with Pulse: it stops waiting for questions and starts anticipating your day-to-day

You get a notification on your phone. It is not the calendar or the mail: it is from ChatGPT. It's called Pulse and, according to OpenAI, it "researches proactively" to give you a personalized summary of the day with thematic cards you can skim quickly or open for more detail. The point is that it stops waiting for your question and comes forward with ideas and next steps, learning from your chats and your feedback and, if you decide to connect them, from apps such as the calendar. The result is a morning briefing that tries to fit your routine before it starts. Pulse arrives as a preview and, for now, is only available in the ChatGPT mobile application for paying users on the Pro plan. It does not replace the usual model but is presented as an addition: the assistant keeps answering on-demand questions but adds a new functionality. With this move, OpenAI takes the first step toward an assistant that aspires to show up before the user even asks. From the chatbot that responds to the assistant that anticipates. Every night, the system analyzes recent conversations and interaction history to prepare a set of cards with curated information. These cards appear the next day in the app as a daily summary that can be browsed in seconds or expanded for more context. The content expires at the end of the day unless it is saved to the chat history. In addition, each card can be opened to request clarifications or next steps, so the experience is not limited to reading but connects with the usual conversation. Personalization is built on simple signals. The user can give a thumbs up or down, ask that the next summary include a specific topic, or modify what is not useful. Pulse collects that information and applies it in the next night's cycle. OpenAI notes that the whole adjustment history is accessible and reversible: it can be consulted or erased at will.
To reduce risks, each set of cards undergoes safety checks that block problematic recommendations or content that violates platform standards. One of the key features is the possibility of connecting Gmail and Google Calendar. In doing so, Pulse can suggest an agenda outline for a meeting, remember the purchase of a birthday gift or recommend restaurants based on a scheduled trip. These integrations are off by default and are managed from the settings. OpenAI insists they improve the relevance of suggestions, although they also expand the surface of personal information the assistant handles. The examples are varied and very everyday. OpenAI mentions everything from tips for preparing a quick dinner to reminders linked to a trip or training suggestions for a triathlon. In the ChatGPT Lab, several students commented that Pulse's usefulness became evident when they began guiding it with concrete requests. One of them reported that, after talking about how to organize his calendar in Taiwan, the system offered him practical steps to optimize train journeys he would not have looked up himself. OpenAI has long been working on its chatbot's security. Even so, cybersecurity experts warn that the risk never disappears completely. Radware documented a case in which a doctored email got ChatGPT's deep-research function to leak sensitive data. The vulnerability has since been fixed, but the example is a reminder that integrating personal information into such an assistant increases exposure and demands caution. For now, Pulse is in a preview phase, limited to Pro subscribers in the mobile app. OpenAI warns that it won't always get things right: reminders of already-closed projects or barely relevant suggestions may appear. The idea is to collect that early usage to correct failures and refine the model.
If everything goes to plan, the function will open first to Plus customers and then to the rest, in a progressive rollout. It is a launch that fits a broader strategy: making ChatGPT a daily assistant and not just an occasional tool. OpenAI seeks to increase usage time and take a step toward a more personal relationship with the application. The move also marks distance from competitors such as Microsoft's Copilot or Anthropic's Claude, which until now have prioritized professional or productivity uses. According to Reuters, the company is also working on an AI browser that would reinforce this commitment to accompany the user in more facets of their digital life. Images | OpenAI. In Xataka | Microsoft has never been so valuable in its entire history. And it has never been so close to the abyss

"Circular financing" between Nvidia and OpenAI may be the stroke of genius of the century... or the collapse

Nvidia has announced a "strategic investment" of up to $100 billion in OpenAI. But it is an investment with a catch: OpenAI will use that money to buy Nvidia chips. The semiconductor manufacturer thus becomes the financier of its own most important customer. Why it matters. This maneuver is dangerously reminiscent of the "circular financing" schemes that characterized the end of the dot-com bubble around 2000. Companies like Lucent, Nortel and Cisco financed operators such as Global Crossing so they could buy their equipment. We are not the first to draw this parallel at this stage of AI. When the bubble burst, suppliers and customers alike sank into a spiral of debt and overcapacity. The agreement will allow OpenAI to build data centers with a combined capacity of 10 gigawatts, equivalent to about 10 nuclear reactors. Jensen Huang, CEO of Nvidia, has acknowledged that this represents between 4 and 5 million GPUs: "double what we shipped last year." The brutal scale, in figures. The numbers are astronomical. According to Huang himself in August, building a 1-gigawatt data center costs between $50 and $60 billion, of which about $35 billion goes to Nvidia chips. By that logic, the 10 projected gigawatts would cost more than $500 billion. The markets have reacted with euphoria: Nvidia shares rose almost 4%, adding $170 billion to its market capitalization. Jensen Huang's company is already brushing a $4.5 trillion valuation. Yes, but. The parallel with the dot-com bubble is disturbing. We already saw these same "vendor financing" schemes in the final stage of the 2000 tech bubble. They did not end well for any of the parties. The difference is that today's numbers are much larger, even adjusting for inflation. The key is whether the productivity gains of generative AI will make up for the money spent. Behind the scenes.
The agreement illustrates the current situation in the AI ecosystem: OpenAI desperately needs computing capacity to maintain its competitive advantage and serve the 700 million weekly users of its products. But infrastructure costs are so high that it needs constant external financing. Nvidia, for its part, seeks to secure future demand for its most advanced chips. The agreement guarantees massive orders while consolidating its dominant position against competitors such as AMD and Intel. "It is a closed loop: Nvidia gives OpenAI money, and OpenAI uses it to buy Nvidia products," summarizes Javier Pastor. The threat. Antitrust experts are already raising their eyebrows. Andre Barlow, a lawyer specialized in competition law, explained to Reuters that "the deal could change the economic incentives of Nvidia and OpenAI, potentially locking in Nvidia's chip monopoly alongside OpenAI's software leadership." The structure creates extra barriers that make it harder for competitors, whether AMD in OpenAI's chips or rivals in AI models, to scale their operations. Trouble is brewing. In perspective. History is full of similar schemes that ended badly. Global Crossing, the telecommunications operator that went bankrupt in 2002, was financed precisely by the same suppliers that sold it equipment. When it emerged that real demand was far below projections, both Global Crossing and its financiers lost billions. The key question is whether demand for AI services will be enough to justify this hundred-billion-dollar investment, or whether we are watching a rerun of the same speculative pattern with even more exorbitant figures. As Bernstein analyst Stacy Rasgon concludes: "On the one hand, it helps OpenAI meet very ambitious infrastructure objectives. On the other, it will further feed concerns about 'circular' financing." Featured image | In Xataka | OpenAI estimates it will bring in $200 billion in 2030. The figure, like everything at OpenAI, is extremely ambitious

Nvidia will invest $100 billion in OpenAI. In reality, it will not spend a single euro

OpenAI has signed a "strategic agreement" with Nvidia. Under this agreement, Nvidia "intends to invest up to $100 billion" in OpenAI gradually, but the truth is that the investment is misleading. Above all because OpenAI will spend those $100 billion buying GPUs from Nvidia. It all stays in the family. What happened. The two companies have begun the process of completing an agreement with a clear objective: to create and deploy AI data centers with a gigantic combined computing capacity of 10 GW. The investment will be made gradually and will be completed "as each gigawatt" of computing capacity is installed in those data centers. Nvidia will thus become a "strategic compute and connectivity partner" for the new data center development plans, says OpenAI. Millions of GPUs. According to statements by Jensen Huang, Nvidia's CEO, that represents between four and five million GPUs. Or, put another way: it is the number of AI GPUs they expect to ship this year, and "twice what we shipped last year." The "vendor finances buyer" strategy. This agreement is not a simple investment but a strategic association in which the hardware supplier invests a massive amount of money in its main customer. In return, that customer commits to building massive infrastructure with the supplier's technology. It is nothing but a closed loop: Nvidia gives OpenAI money, and OpenAI uses it to buy Nvidia products. This sounds like a bubble. Several analysts are pointing out how this once again recalls the dot-com bubble, when companies lent each other money to buy each other's products. That raises suspicions and questions about the long-term sustainability of these agreements. Companies growing stronger together. The circular agreement in fact serves to strengthen both companies and solidify their positions as dominant and indispensable actors in the AI industry.
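A couple of back-of-the-envelope checks, using only the public figures quoted in these announcements (10 GW of capacity, Huang's four-to-five million GPUs, and his August estimate of $50-60 billion per gigawatt, about $35 billion of it chips), give a sense of scale. Note the per-GPU figure assumes the midpoint of Huang's range and spreads the entire facility power budget (cooling, networking, CPUs) over the GPUs, so it is an all-in number, not the chip's own draw:

```python
# Rough sanity checks from the public figures quoted in the article.
GIGAWATTS = 10
GPUS = 4_500_000                # midpoint of Huang's 4-5 million range
COST_PER_GW_B = (50, 60)        # billions of dollars per GW, low and high estimate
NVIDIA_SHARE_PER_GW_B = 35      # billions per GW going to Nvidia chips

watts_per_gpu = GIGAWATTS * 1e9 / GPUS
total_cost_b = tuple(c * GIGAWATTS for c in COST_PER_GW_B)
nvidia_take_b = NVIDIA_SHARE_PER_GW_B * GIGAWATTS

print(f"All-in power budget per GPU: ~{watts_per_gpu:,.0f} W")         # ~2,222 W
print(f"Total build-out cost: ${total_cost_b[0]}-{total_cost_b[1]}B")  # $500-600B
print(f"Of which Nvidia chips: ~${nvidia_take_b}B")                    # ~$350B
```

The arithmetic shows why the "closed loop" framing sticks: of a $500-600 billion build-out, roughly $350 billion would flow straight back to Nvidia, several times the $100 billion it is putting in.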
In fact, this strategic alliance makes life very difficult for rivals such as AMD or Intel.

Nvidia is worth $170 billion more. The announcement triggered an immediate reaction in Nvidia's valuation: its shares rose almost 4%. The market capitalization of Jensen Huang's company grew by $170 billion in that session and is already approaching $4.5 trillion, pulling even further ahead of Microsoft, Apple and Google, which are all above $3 trillion.

Long live the hype. Here, once again, the narrative of expectations and hype is reinforced. These companies' confidence in the future of AI is plain, but they are interested parties, and for now OpenAI's revenues (never mind its rivals') remain far below what these companies are spending on the technology.

The energy challenge. The plans to build infrastructure with 10 GW of capacity are also astronomical. According to some estimates, those 10 gigawatts are equivalent to the output of about ten nuclear reactors, which typically provide around 1 GW per plant.

A colossal cost. Today's data centers range from very modest 10 MW facilities to extraordinary 1 GW ones. OpenAI's plans would leave those facilities far behind in computing capacity. In August, Huang told investors that building a 1 GW data center costs between $50 and $60 billion, of which about $35 billion goes to Nvidia chips. With those figures, the total cost of those 10 GW of joint computing power would exceed $500 billion, a figure that, curiously, coincides with that of Project Stargate.

Image | Flickr (TechCrunch) | Nvidia. In Xataka | 5,000 "tokens" from my blog are being used to train an AI. I have not given my permission
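The back-of-the-envelope arithmetic behind that $500 billion figure can be sketched in a few lines. This is only an illustration of the numbers quoted above (Huang's $50–60 billion per gigawatt, with roughly $35 billion of that going to Nvidia chips); the variable names and the per-GW range are approximations taken from the article, not official figures.

```python
# Rough sketch of the data-center cost math quoted in the article.
# All figures are the ones attributed to Jensen Huang, not official quotes.

GW_PLANNED = 10                    # combined computing capacity OpenAI and Nvidia plan
COST_PER_GW = (50e9, 60e9)         # estimated $50-60 billion per 1 GW data center
NVIDIA_CHIPS_PER_GW = 35e9         # portion of each GW reportedly spent on Nvidia chips

low, high = (c * GW_PLANNED for c in COST_PER_GW)
chips_total = NVIDIA_CHIPS_PER_GW * GW_PLANNED

print(f"Total build-out: ${low / 1e9:,.0f}-{high / 1e9:,.0f} billion")
print(f"Of which Nvidia chips: ${chips_total / 1e9:,.0f} billion")
```

Even the low end of the range lands at the $500 billion the article mentions, which is why the comparison with Project Stargate's budget comes up so naturally.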

OpenAI has a problem with the "Codex" brand. These are all the Codexes it is juggling

OpenAI has just launched GPT-5-Codex. The problem is that it already had three other products called exactly the same thing.

Why it matters. This pile-up of identical names turns choosing a tool into a headache. Each "Codex" does something different, but from the outside it looks like the same product multiplied.

In detail. The "Codex" family has these members:

GPT-5-Codex, the newcomer. A model that programs for hours without supervision. It adjusts its speed to the task's complexity: fast for simple tasks, slow and meticulous for large projects.

Codex Cloud, the veteran. It works like a remote programmer: you send it a job and it comes back with finished code after a few minutes of working on its own.

Codex CLI, the local assistant. A terminal utility that helps you from your own computer. It competes directly with tools such as Claude Code.

Codex (2021), the grandfather of the family. It powered the first versions of GitHub Copilot, but it is no longer operational.

Between the lines. OpenAI is trying to fix the naming mess. It now writes "GPT-5-Codex" with hyphens to set it apart, implicitly admitting that the situation has lacked clarity. The new model cuts resource use on basic tasks by 94%, but doubles processing time on complex projects. Internally it already reviews more code than human reviewers do.

The background. OpenAI appears to have developed these tools without central coordination, much like what happened with the model picker before GPT-5: each team chose the name "Codex" independently.

What's next. The company is preparing API access for its latest model. In the meantime, it is time to accept that "Codex" is more a business philosophy than a specific product. The lesson: even the most advanced companies can stumble over something as basic as naming their creations.

Featured image | OpenAI. In Xataka | We thought ChatGPT was used mostly for work. OpenAI itself has just shown otherwise

We thought ChatGPT was used mostly for work. OpenAI itself has just shown otherwise

For months, many of us assumed that ChatGPT had become the perfect tool for office work and for programming. OpenAI has published its first detailed study of what users really do with it and who they are, and the portrait breaks that intuition: most conversations are not about work.

Personal use dominates and keeps growing. The data reflect a notable shift in how ChatGPT is used: in June 2025, 73% of conversations were non-work, whereas in June 2024 the two were almost evenly split. There are other interesting figures: the audience is mostly young, with about half of all messages sent by people between 18 and 25 years old. Add to that a shift in gender profile: the earliest records showed a predominance of typically male names, but in 2025, 52% correspond to female names.

More than work: ChatGPT shows its most personal face. The company classified more than one million conversations into seven major categories. The most common, "practical guidance," accounts for 28.3% of all interactions and includes requests for help with everyday tasks, academic queries and training tips. The study also paints a curious picture: it notes that adoption is growing faster in less wealthy countries, although it does not break usage down by country.

The second major block is writing-related requests, dominated by editing or critiquing texts and by personal communication. Programming also appears, but it concentrates only 4.2% of the analyzed chats.

One trend gaining strength is information seeking. OpenAI states that these queries have become a close substitute for web search engines. Since May and June 2024 this use has grown steadily, until it became the second most common. Questions about products also appear in this section, accounting for 2.1% of the queries within that category. These data raise questions about the future of search and about how the company led by Sam Altman is challenging Google.
Another relevant block is personal advice and intimate conversations. The report indicates that 1.9% of interactions relate to personal reflections and relationships, and 0.4% are role-play, including the use of ChatGPT as a virtual "companion." Although the study insists these are small figures, the issue is under scrutiny in several countries because of this technology's impact on some people's mental health.

The study runs to 62 pages and covers the period from May 2024 to June 2025, with data from 1.5 million users and a sample of 1.1 million conversations. As for how OpenAI obtained this information, the company says it used its own models to analyze the messages, preventing human researchers from reading individual conversations. The demographic information comes from the data users provide when they register.

Images | Solen Feyissa | Levart_photographer. In Xataka | China is selling us a future full of humanoid robots. We have (many) doubts
