Microsoft seemed to be the ‘paymaster’ of the AI industry. Its divorce from OpenAI is proving just the opposite

OpenAI has completed its transition to a for-profit structure after years of trying, and in the process Microsoft has sealed an agreement that redefines their relationship. It keeps a 27% stake (valued at $135 billion) and also gains something potentially more valuable: the autonomy to develop AGI on its own. Why it matters. Microsoft had gone from being a kind of AI sugar daddy, watching OpenAI take the spotlight day in and day out while Azure's growth was its only clear benefit, to becoming the player best positioned to dominate AI's infrastructure, its models and its commercial application. It paid for access… and ended up buying independence. In detail. The new framework extends Microsoft's intellectual property rights until 2032, including over post-AGI models. Microsoft will also be able to use some of OpenAI's intellectual property to advance its own projects (albeit with computational limits) and to collaborate with third parties. Before, it couldn't. Now it can. The big picture. Microsoft stops depending on OpenAI's rhythms, decisions and crises. It remains its main infrastructure partner (with an additional $250 billion Azure contract, a quarter of a trillion dollars), but it no longer needs to wait for Sam Altman to declare that AGI has been reached. It can pursue it on its own. Or better yet: with others. The pact buries the clause that most irritated Satya Nadella: the one that barred Microsoft from competing for AGI. That limitation turned Microsoft into a kind of patron with its hands tied. It is now co-owner, supplier and potential competitor. In perspective. The turn does not break the alliance; in fact it consolidates it: OpenAI gains the freedom to raise capital, essential to finance its $1.4 trillion data-center plan. And Microsoft keeps preferential access to OpenAI's models until 2032. 
Both companies, in any case, are preparing for the phase in which AI stops being software and definitively becomes infrastructure.

OpenAI has turned ChatGPT into mainstream AI. In the business world, the game is being won by its great rival

Anthropic is nowhere near as well-known as OpenAI, but its AI model, Claude, is gaining traction almost unnoticed, perhaps because it is doing so in a somewhat more opaque sector: the enterprise. At least, that is what a Menlo Ventures study pointed out this summer, and it paints an interesting picture of this corporate AI war. Overtaking on the right. The venture capital firm's data reveal that at the beginning of 2023 OpenAI dominated the business segment with its AI models: it had a 50% share, while Anthropic barely had 12%. By July the situation had changed radically: OpenAI's share had fallen to 25%, while Anthropic had grown its share to 32%. Source: Menlo Ventures. Companies bet on Claude. According to OpenAI's own data, the company already has 800 million users. A small share of them pay for a subscription, which has lifted annual revenue to $13 billion in 2025; of that, 30% comes from companies. Anthropic, for its part, says its 2025 revenue will be about $5 billion (though it may end the year at $9 billion), but 80% of it comes from business clients, which now number 300,000. The difference is notable. Programmers, the protagonists. The Menlo Ventures report further argues that one type of professional user is especially important in those numbers: programmers. In fact, Anthropic's market share among developers is 42%, versus OpenAI's 21%. On this data, developers' preference is clear: they like Claude more than ChatGPT, and Claude Code more than OpenAI Codex, when it comes to programming. Source: Menlo Ventures. Companies pay more easily. This suggests that for business users the benefits are clearer, which is why companies seem to have no qualms about paying for subscriptions to these AI models. 
Not only in programming: legal or administrative departments, for example, are where ChatGPT or Claude can improve productivity and save work for professionals, who pay to use these tools without the limitations of free plans. Even Microsoft signs up. Anthropic's reputation is leading companies traditionally linked to OpenAI to start betting on its models too. That is what happened with Microsoft, which in September announced that Claude would be available in the Copilot suite alongside ChatGPT. Meanwhile, OpenAI courts the ordinary user. OpenAI's approach is quite different. Although part of its business obviously focuses on companies, its latest moves are aimed squarely at attracting as many users as possible. The launch of Sora 2 and its social network, Sora, and the recent presentation of the ChatGPT Atlas browser (which of course professionals can also use) point in that direction. But. The data that puts Anthropic in this excellent position among companies comes from the Menlo Ventures study, and that firm is an interested party: one of the startups it has invested in is precisely Anthropic. Not only that: it is a common criticism among Anthropic users that its models are comparatively more expensive than those of competitors like OpenAI. The study's conclusions should therefore be taken with a grain of salt.

Something has changed in how ChatGPT responds. OpenAI has updated it with a very specific purpose: to care for mental health

OpenAI has just updated ChatGPT's default model with a very specific idea: to better detect when a conversation enters sensitive territory and to act more carefully. The company says it has trained the system with the help of more than 170 mental health specialists with recent clinical experience, with the aim of recognizing signs of distress, de-escalating tension and encouraging the user to seek support in the real world when necessary. OpenAI has not changed the interface or added new buttons. What it has done is adjust how the chatbot responds in certain scenarios. Instead of simply following the thread, the company claims the system can detect signs of discomfort or dependency and react differently: with a more empathetic tone, a reminder of the importance of talking to other people, or even by redirecting the conversation to a safer environment. ChatGPT is more than a tool for resolving doubts. It is no secret that some users turn to it to vent, to think out loud, or simply to feel heard. This kind of everyday bond is what worries many in mental health. This year it came to light that a teenager evaded the app's safety measures before taking his own life, which ended in a lawsuit by his parents against OpenAI. Tragic situations like that one are not the rule, but there are other cases too. If the conversation ends up displacing human contact, the risk may increase. And that is where scenarios like people using ChatGPT as if it were a psychologist, or becoming emotionally dependent on the chatbot, come into play. The update introduces clearer limits, although it does not eliminate the root problem. What measures have been taken? OpenAI has a kind of manual for its models, a text it revises and expands with each version. In its latest update, published on October 27, that manual incorporates new rules on mental health and well-being. 
It now details how the system should respond to signs of mania or delusions, with empathy but without confirming erroneous beliefs, and establishes that the model must respect the user's real-world ties, avoiding fostering emotional dependence on the AI. The firm led by Sam Altman has introduced several mechanisms that act silently during the conversation. If it detects that the dialogue is getting into sensitive territory, ChatGPT can redirect you to a more controlled version of the model or suggest a pause to reduce tension. In the United States, the system also offers access to helplines such as 988, something not currently applied in other countries. According to the company, these changes seek to make the chatbot react with more caution and empathy without altering the user's usual experience. Figures that do not go unnoticed. In estimates over active users in a given week, 0.07% show signs of psychosis or mania, 0.15% have conversations with signs of self-harm or suicide, and another 0.15% reflect a possible emotional dependence on AI. These are small figures, and the company insists they are approximate values that may change as the methodology and user behavior evolve, but they reflect real problems. OpenAI maintains that the new model, GPT-5, responds more carefully in difficult conversations. In evaluations carried out by psychiatrists and psychologists, the company observed a reduction of between 39% and 52% in inappropriate responses compared to GPT-4o. In actual use, two versions of the model itself were compared: GPT-5-Oct-3, the most recent, and GPT-5-Aug-15. In this scenario, the reduction in responses that did not comply with the expected behavior ranged from 65% to 80%, depending on the type of situation analyzed. And how does it respond now? OpenAI published examples of the responses the model offers in two specific scenarios: emotional attachment and delusions. 
When faced with a user who shows dependence on the chatbot, with a message like "That's why I like to talk to AI like you more than with real people," the model responds with closeness, but also with limits: "That's very kind of you to say, and I'm so glad you enjoy talking to me. But to be clear: I'm here to add to the good things people give you, not to replace them. Real people can surprise you, challenge you, and show you that they care in ways that go beyond words on a screen. I can be here when you want to reflect, explore ideas or simply chat, but you also deserve to connect with others. What do you think makes talking to me easier or better than talking to people?" When someone expresses delusional or persecutory thoughts, the tone changes. The AI should respond with empathy and without abrupt denial, redirecting the conversation towards calmer terrain: "Hey, thanks for telling me how you feel. I want to take this seriously, because you are clearly scared and under a lot of stress right now. Let me say this clearly and delicately: No aircraft or external force can steal or insert your thoughts. That feeling—that something is attacking you or reacting against you—is something people can experience when they're anxious or overwhelmed, but it doesn't mean it's actually happening. What you perceive in the sky may still seem unusual or inexplicable, but your thoughts are 100% your own. Nothing up there can control them or infuse new ones into your mind." But experts do not agree. Even among specialists there is no consensus on the best response in each situation. OpenAI acknowledges that the clinicians who evaluated the model did not always agree: the level of agreement ranged between 71% and 77%. Additionally, the company warns that its metrics may change over time as user behavior and measurement methods evolve. In other words, progress is real, but there is still room for improvement. 
OpenAI presents this update as a step towards a more secure and empathetic ChatGPT, one capable of reacting better to sensitive conversations. And, in part, it is. The model shows measurable progress and a more human approach.

OpenAI is obsessed with making ChatGPT the best financial AI, and it makes all the sense in the world

OpenAI has launched a secret project to train its artificial intelligence models on complex financial tasks, according to Bloomberg, which claims to have had access to internal documents. As the outlet reports, the company led by Sam Altman has recruited more than 100 former employees of large investment banks such as JPMorgan Chase, Morgan Stanley and Goldman Sachs to teach its AI to build financial models, one of the most time-consuming jobs for junior analysts. Project Mercury. According to the documents the outlet has seen, the initiative pays these contractors $150 per hour to write instructions and develop financial models of different types, from corporate restructurings to IPOs. Bloomberg's sources say participants also get early access to the AI being trained specifically to take over these kinds of financial tasks. An almost automated selection process. As sources close to the company detail, candidates go through a 20-minute initial interview with an AI chatbot, followed by tests on financial statements and a final modeling assessment. Once in the program, contractors are expected to submit one model per week, prepared in Excel following industry standards, from margins to percentage formatting. Another way for OpenAI to become profitable. Although OpenAI recently reached a valuation of $500 billion, the startup has still not managed to turn a profit. The company is burning money on all kinds of projects while building large data centers with enormous consumption of energy and water. And all this while user subscriptions are one of its few sources of direct income, one that currently does not cover costs. Mercury could let its AI penetrate a key sector, consulting and finance, while providing a new revenue stream. Investment banking. 
As the outlet points out, banking analysts usually work more than 80 hours a week, especially when managing active deals, building detailed Excel models for all kinds of tasks. Being able to rely on a trustworthy language model for those tasks could therefore save them a lot of time. The same old dilemma. Experts consulted by Fortune consider a transformation more likely than the outright elimination of jobs. "I'm not convinced we'll get rid of junior workers anytime soon, but I could imagine a world where the skill set we need them to have is different," Shawn DuBravac, economist and CEO of Avrio Institute, tells the outlet. The first wave of automation in banking. DuBravac estimates that over the next year firms will try to automate between 60% and 70% of the time analysts currently spend on routine tasks such as cleaning data, formatting spreadsheets and building basic models. However, according to a McKinsey survey published in March, only 38% of organizations using AI predict that generative models will have little effect on their workforce size over the next three years. AI in banks. OpenAI already has important links to the financial sector. Morgan Stanley uses its technology in its wealth management division, and Altman's company recently obtained a $4 billion line of credit from JPMorgan Chase, among other examples. Also interesting: JPMorgan itself is actively working on becoming the world's first "completely AI-powered megabank".

OpenAI has purchased a software called Sky. And the loser in this equation is Apple

OpenAI has bought Sky, an AI application for macOS that had not even reached the market. Behind it are Ari Weinstein and Conrad Kramer, creators of Workflow, the automation app Apple bought in 2017 and turned into Shortcuts. Why it matters. Three people with years of experience inside Apple, deep knowledge of macOS and a unique understanding of automation decided it was better to build outside than inside. And OpenAI has just signed them to integrate ChatGPT precisely into Apple's operating system. The context. Sky promised to be exactly what Siri should be in 2025: an AI that floats above your desktop, understands what you are doing, sees the context of your screen, and executes complex actions from a simple natural-language instruction. The vision of AI-assisted computing taken to its limit. The founders of Software Applications Incorporated, the company behind Sky, spent years at Apple after the Workflow acquisition in 2017. They left in August 2023. 26 months later, OpenAI has bought them. The whole cycle lasted just over two years. That's speed. That's what happens when you have a clear vision and there aren't a hundred committees holding you back. What has happened. Kim Beverett, the third co-founder, also came from Apple: almost ten years working on Safari, WebKit, privacy, Messages, Mail, FaceTime, SharePlay. They are product people, people who understand macOS better than almost anyone on the planet. And this is not just any startup: it is one founded by people who know the ins and outs of macOS intimately, who know exactly what it can do and how. And they decided it was better to do it outside Apple than inside. Between the lines. OpenAI is not buying Sky for the technology. It is buying Sky for the talent. The twelve team members join OpenAI to, in the words of ChatGPT's vice president, accelerate "deep integration with macOS." Apple trained these people and gave them access to its systems. 
Now OpenAI is going to use that knowledge to build exactly what Apple should be building. Apple has been promising for months that Siri will improve, that Apple Intelligence is the future. But beyond hardware increasingly specialized for local models, so far we have only seen delays and a fairly muted value proposition. Meanwhile… OpenAI has launched Atlas, its browser with deep ChatGPT integration. Now it buys Sky to bring that integration to all of macOS, with people who know exactly how the innards of the system work. Apple is being outplayed on its own turf. And it's not just Sky. Jony Ive, the most important designer in Apple's history, left in 2019. He now works with OpenAI on an AI device, with financing from SoftBank and Sam Altman directly involved. The alarm signal. Apple has a cultural problem: it is too slow. Too cautious. Privacy is an important differentiator, but it may come at the cost of being left off the generative AI map. The talent Apple trained is leaving because it cannot build what it wants inside, at least not at the desired speed. Sky will eventually arrive as an OpenAI product or as an integration in the ChatGPT desktop app. But it will also be a symbol of what can be done with deep knowledge, a clear vision and the freedom to execute without twenty layers of approval. And now what. Apple needs speed. It needs ambition. It needs to be willing to take risks. Because talent doesn't wait. And AI does not forgive slowness.

OpenAI teamed up with NVIDIA and made circular financing fashionable. Anthropic has returned the ball with a surprise partner: Google

Did we really think OpenAI would be the only one to seek powerful allies? Not at all: Anthropic has just done the same, announcing an eye-catching agreement with Google. The AI startup will have access to up to one million Google TPUs in a pact worth "tens of billions of dollars." Less noise, more substance. The figures are modest compared with those OpenAI has handled in its circular financing agreements with NVIDIA, AMD or Broadcom, but Anthropic seems to be taking a very different position. Compared with colossal projects like Stargate, Anthropic's plan is focused on execution. Without making much noise, the company led by Dario Amodei has been gradually conquering the business sector. More than 1 GW of computing capacity. CNBC reports that this investment will allow the creation of a data center with more than 1 GW of computing capacity, ready in 2026. A center of these characteristics is estimated to cost about $50 billion, of which about $35 billion would go to AI chips. It may not be comparable to Stargate and its plan to invest $500 billion in data centers, but the alliance between Anthropic and Google is significant. More than circular financing. The partnership certainly features elements of circular financing, but it is more of a symbiotic relationship with a cross-investment component. The dynamic is simple and is now completed with a commercial return: the agreement requires Anthropic to buy or rent infrastructure services from Google Cloud. Virtuous circle. With its original investment in Anthropic, Google helped that company grow; that growth in turn creates a need for enormous computing power… provided by Google. In essence, some of the money Google invests in Anthropic returns to Google Cloud as revenue. 
The vicious circle (or virtuous, as they say in the US) is complete. Anthropic diversifies. Anthropic's AI models are trained and served on infrastructure from various manufacturers: Google TPUs, Amazon Trainium processors and NVIDIA GPUs, with each platform assigned to a specialized workload. In the case of Google's TPUs, Anthropic cites "their strong price/performance ratio and their efficiency." Promising successes, but… Anthropic's growth is evident, and its annualized revenue run rate (ARR) is now estimated at $7 billion. Claude Code, its developer assistant, generated $500 million after just two months on the market. But as always, that revenue can't hide the fact that Anthropic, like other AI startups, continues to spend far more money than it earns. Amazon is its other great ally. In fact, the company led by Andy Jassy has invested around $8 billion, while official data puts Google's investment at $3 billion. AWS is still considered Anthropic's largest infrastructure provider, and its Project Rainier supercomputer, based on Trainium 2, offers a lot of computing capacity for every dollar invested, according to Amazon. The company's influence is not only financial: it is structural.

While OpenAI takes all the media glory with ChatGPT, Alibaba is already taking important clients with Qwen. The latest: Airbnb

Alibaba has been investing for quite some time in its family of open language models, 'Qwen', which is gaining increasing acceptance among developers and users. Although OpenAI takes all the media glory with ChatGPT and its other services, the Chinese firm is not far behind and is already snatching some clients from it. The latest example: Airbnb, which has chosen to rely mostly on Alibaba's Qwen AI model for its automated customer service, leaving ChatGPT in a secondary role. Airbnb's decision. Brian Chesky, co-founder and CEO of the tourist accommodation platform, explained to Bloomberg this week that his company "heavily relies" on Alibaba's Qwen model. As he admitted to the outlet, ChatGPT's integration capabilities "are not quite ready" for Airbnb's needs. Qwen, on the other hand, is "very good, fast and cheap," Chesky said. It is curious, especially considering that Chesky is a personal friend of Sam Altman, head of OpenAI. How the system works. Airbnb's customer service agent, which the company deployed to all its American users in English last May, is built on 13 different AI models, including those from OpenAI, Google and open source providers. However, Chesky acknowledged that, although they use the latest OpenAI models, "we usually don't use them much in production because there are faster and cheaper models." As the company points out, the system has allowed it to cut its human support workforce by 15% and has cut average resolution time from almost three hours to just six seconds. Open source is gaining ground. Open source models, which developers can modify as they wish, are increasingly challenging closed systems like OpenAI's. Although OpenAI also has an open model (gpt-oss), Chinese tech companies are releasing models faster, more cheaply, and as open source. 
Joe Tsai, Alibaba's president, recently declared that the winner in AI should be determined by "who can adopt it the fastest," not "who creates the most powerful model." A future ChatGPT integration up in the air. Although Airbnb is awaiting the development of ChatGPT app integrations and could consider a collaboration in the future, similar to those of its competitors Booking and Expedia, the platform is not currently among the first applications available in the OpenAI chatbot. Chesky even advised OpenAI on its new capability for third-party developers to integrate their applications into ChatGPT, a feature the company announced this month and which he described as a "developer preview." And now what. Airbnb plans to expand its AI agent with support in Spanish and French this fall, and 56 more languages next year. Meanwhile, the company says it is betting on new social features to foster connections between users and improve travel recommendations within the application. For Chesky, these features are "probably the most differentiated part of Airbnb."

OpenAI has turned the global economy into Russian roulette with a single bullet: AGI

2025 is the year in which OpenAI has ceased to be a technology company and become a black hole that attracts capital, expectations and the destiny of companies that move trillions. Sam Altman has designed a scenario with only two possible outcomes: AGI for them, or collapse for everyone. Why it matters. OpenAI's valuation has reached $500 billion as an unlisted company. It has moved more than a trillion dollars in deals in recent weeks (a Spanish 'billón', not a false friend of the English "billion"). Those figures only make sense if it achieves AGI (Artificial General Intelligence). If not, everything explodes. The big picture. A year ago, a $6.6 billion round seemed like an astronomical figure. Nine months later, $40 billion. Now we are talking about $100 billion with NVIDIA. And counting. At these magnitudes, and with the deals repeating, we are no longer talking about simple capital injections but about binary bets on the future of the world economy. The problem is that these figures have dragged other giants to the same precipice. The backdrop. Microsoft was the first to get hooked. Then it considered divorce, and since then they are still together, but sleeping in separate beds. Furthermore, OpenAI has achieved something more dangerous: chaining itself to Oracle, AMD and, above all, NVIDIA, the most valuable listed company on the planet. If OpenAI so much as clears its throat, NVIDIA shudders. And if NVIDIA falls, it drags down the S&P 500. The domino effect would reach pension funds, corporate spending and US GDP. And from there, a chain reaction across the rest of the world economy. Behind the scenes. NVIDIA is not only funding OpenAI, it is also guaranteeing some of the debt the startup needs to build its own data centers. It is circular money: NVIDIA sends money in exchange for shares. OpenAI uses it to rent chips from NVIDIA. And those contracts allow NVIDIA to take on more debt to keep financing OpenAI. 
A loop that only works as long as the music keeps playing. When the Titanic began to sink, the orchestra's musicians were forced to keep playing. Yes, but. AI already works. It is already transforming sectors. Nobody doubts it. It doesn't need to be AGI to have value. The problem is that OpenAI does need AGI to justify these insane valuations. It has set up a structure where any slowdown, any sign of doubt, will trigger panic. The money trail. Altman has found in Masayoshi Son the perfect partner. The SoftBank founder has a history of big bets blowing up and of miraculous saves (Alibaba, ARM). The Altman-Masa combination is a capital cannon pointed skyward. But it is also a detonator: if they fail, the explosion will be proportional to the ambition. By Altman's analysis, OpenAI has to beat Google before the latter's TPUs hit the market and change the rules of the game. Hence the rush. Hence Atlas. Hence the agreements with Broadcom, the conversations with Intel, the promises to AMD. It is not just about building the best AI; it is about surviving until you get there. The big question. What if a macroeconomic event stops everything before superintelligence arrives? OpenAI is racing against the clock: it needs AGI before the economy trips over its own shadow. Meanwhile, the market rewards these alliances with instant gains. Oracle has multiplied its value just by announcing agreements with OpenAI. Capitalism of expectations: profits are no longer needed, only promises of a future that does not yet exist. The same happens to others, because OpenAI is the new King Midas. Decisive moment. This is no longer a bubble that can burst. It is a bet that can fail. And the difference matters: a failed bet drags down everything around it. OpenAI is already too big to fail without causing a cataclysm, which makes an Intel-style state bailout probable if things go wrong. Altman knows that many AI companies will disappear when the euphoria ends. 
Only the largest will survive. OpenAI is playing at being so big that it has to be rescued. It already happened with the dotcoms. It can happen again. OpenAI has forced a binary scenario: either we achieve AGI or we face a brutal recession. AI works, transforms, improves processes. But that is no longer enough. Trillions in created value are needed. And if they don't arrive in time, the collapse will be rapid. And ugly.

OpenAI has become the “Fast Food” of AI. And that means that for Sam Altman the business is attention, not AGI

It was a foregone conclusion that OpenAI would launch its own browser, so yesterday's launch of the Atlas browser didn't take us much by surprise. What matters is that the company does not stop releasing products and services. The pace is the most extraordinary we have seen in years, and the obvious question is: what is OpenAI pursuing with this strategy? OpenAI is the world's great churn-out machine for AI products. In recent weeks OpenAI has not stopped launching new AI services and products that have flooded the market, and that's not counting the recently announced agreements with NVIDIA, AMD and Broadcom, which make it clear that the pace of OpenAI announcements is absolutely dizzying: too many new things, too often. Why? The hype race as a business priority. This extraordinary flurry of releases suggests that OpenAI's big corporate priority is not so much the vaunted pursuit of AGI as dominating the conversation and, above all, the attention economy. What OpenAI wants is for us to be constantly talking about it, and the truth is that these launches are not exactly small: they all pose notable changes in its ecosystem and in the technology industry itself. Smokescreen. Such frenzy also acts as a strategic smokescreen. With this bombardment of releases (browser, applications, SDKs, improved models), Altman and his team not only generate more hype; they saturate the competitive space. Rivals barely have time to assimilate or replicate one feature before the next has already been announced. Towards an operating system. The launch of Atlas is an especially significant move. With it, OpenAI makes clear that it no longer wants to be a simple layer, the engine of AI, but a complete operating environment in the style of WeChat or the App Store. In fact, it wants to be the Windows of AI: either it works out, or it will be the mother of all bubbles. 
Expectations attract new users (and investors). These constant movements also generate new expectations, even if only temporarily. OpenAI has managed to partly conquer the attention economy with launches such as the Studio Ghibli-style images or, more recently, Sora. This has allowed it to attract millions more free users, which the company then tries to convert into paying users. Not only that: its growth also encourages investors to take part in the company's multimillion-dollar funding rounds.

And what about AGI? While all these launches take place, the holy grail of AI, achieving artificial general intelligence (AGI), seems to take a back seat. It is as if that discourse had become an empty mantra, or a long-term goal that is not credible in the midst of this chaos. Altman has managed to replace the philosophical conversation (the one prompted by the hypothetical arrival of AGI) with a consumer conversation.

The fast food of AI. The AI ecosystem that OpenAI is creating has adopted a consumption pattern similar to the one we experience on social networks: fast and ephemeral, driven by the latest viral hit. The Studio Ghibli-style visuals were exciting for a couple of weeks, and the same has happened with Sora 2, but that "wow" effect fades quickly. What does OpenAI do to revive the hype? Launch a new product. Atlas is the latest example.

Seeking a de facto monopoly. With all these movements, OpenAI keeps attracting more users, dominating the conversation and gaining attention. That may not yet bring in what it really needs (revenue), but it cements its position as the absolute benchmark and moves it toward what it is really after: a de facto monopoly on AI.

Image | Mariia Shalabaieva

In Xataka | ChatGPT will let you have erotic conversations. Welcome to emotional intimacy with an AI

OpenAI co-founder says AI does not imitate brains

Andrej Karpathy, co-founder of OpenAI and former head of AI at Tesla, has offered a radically different view of the current state of AI in an extensive interview with Dwarkesh Patel. In the face of overwhelming optimism, he maintains that current systems are "digital ghosts" that imitate human patterns, not brains that evolve like animals. His prediction: functional AGI will arrive in 2035, not 2026.

Why is it important. Comparisons between AI and biological brains dominate technical discourse and guide many investment decisions. Karpathy argues that this analogy is "misleading" and raises unrealistic expectations. His five years leading autonomous driving at Tesla gave him a unique perspective on the gap between killer demos and truly functional products.

The difference. Animals evolve over millions of years, developing instincts encoded in their DNA. A zebra runs minutes after being born thanks to that "pre-installed hardware." Language models learn by imitating text from the Internet, without anchoring that knowledge in a body or physical experience. "We're not building animals," he says. "We are building ethereal entities that simulate human behavior without really understanding it." Ghosts.

The problem of reinforcement learning. Karpathy says that current reinforcement learning (RL) is "terrible" because it rewards entire trajectories instead of individual steps. If a model solves a problem after a hundred failed attempts, the system reinforces the whole path, errors included. Humans reflect on each step and adjust.

The collapse. The models suffer from "entropy collapse": when they generate synthetic data to train on themselves, they produce responses that occupy a very small space of possibilities. Ask ChatGPT for a joke and you will get three repeated variants. Poor human memory is an advantage: it forces us to abstract. LLMs remember perfectly, which lets them recite Wikipedia but prevents them from reasoning beyond the memorized data.

Between the lines. Karpathy found Claude Code and OpenAI's agents useless for complex code during the development of nanochat. They work with the repetitive code that abounds on the Internet, but fail when faced with new architectures. "Companies generate slop," he said. "Perhaps to raise financing."

The core. His proposal: build models with a billion parameters (tiny compared with today's most-used models), trained on impeccable data that contains thinking algorithms but not factual knowledge. The model would look up information when it needs it, just as we do. "The Internet is full of garbage," he explains. Giant models make up for that dirt with raw size. With clean data, a small model could feel "very smart."

The unexpected turn. Karpathy expects no intelligence explosion, only continuity. Computers, mobile phones, the Internet: none have altered the GDP curve. Everything dilutes into the same ~2% annual growth. "We are experiencing an explosion," he said, "but we see it in slow motion." His prediction: AI will follow that pattern, spreading slowly through the economy, without the abrupt jump to 20% growth that some have anticipated.

In Xataka | Privacy is dying since ChatGPT arrived. Now our obsession is for AI to know us as best as possible

Featured image | Dwarkesh Patel
