Sam Altman is laying the foundations for post-humanism as the philosophical current of the AI era. That's not good news

“But it also takes a lot of energy to train a human. It takes about 20 years of life and all the food you consume during that time to become intelligent.” These two sentences, delivered at the India-AI Impact Summit 2026, were enough to set social media on fire. But Sam Altman didn’t stop there. “Not only that: it took the broad sweep of evolution across the 100 billion people who have ever lived, who learned not to be eaten by predators and to understand science and so on, to create you,” he continued. Therefore, he argued, criticism about “how much energy is needed to train an AI model” is extremely unfair.

The most “unpopular” technology in history… And it’s curious. Not because the argument is not understandable (or even reasonable). It’s curious because Altman and the rest of the AI bigwigs don’t seem to realize that they are doing everything possible to make AI extremely unpopular among the population. Maybe it’s nothing new. Maybe it’s similar to what happened with the salesmen of weaving machines in the midst of the Industrial Revolution. Maybe it’s similar to what motivated movements like that of the Luddites, whose history dozens of historians later rewrote as that of poor technophobes. What has changed is that we are now broadcasting it to the entire world, live, and very insistently. The discourse they use to ‘sell’ their technology to investors, technical elites and politicians around the world can only be understood, at a public level, as a very sophisticated way of saying: ‘human things get in the way.’ Or not so sophisticated, of course.

…that is finding its “public”. Over the last few years, in fact, the process has become less and less subtle and more blatant. It is not something limited to AI companies, but it is an increasingly clear phenomenon: people speaking to a convinced hyper-minority while alienating the vast social majority.
And artificial intelligence is the tip of the spear. It wouldn’t be a problem if there weren’t something else: the current great technological battle is not only technical; it is ideological, philosophical and a battle of values. For the social changes they hope for to succeed, the ‘Overton window’ must be moved as quickly as possible. And it’s working.

The best example is Japan: Team Mirai ran in the last election. As Antonio Ortiz explained, it is “a new Japanese party founded by engineers” with “a fairly accelerationist program: government chatbots and databases for transparency of donations and to make politics ‘faster’, reduce paperwork and achieve an increase in productivity to compensate for the labor shortage.” Well, that party just won 11 seats and 7% of the vote. In a way, two apparently contradictory processes are two legs of the same phenomenon: the discourse becomes more explicit as the population becomes more receptive to it.

And changing the world is also (and above all) changing ideas. We tend to have a softened vision of social change. However, several psychosocial processes are usually key for such changes to take hold: delegitimization (“what ruled until now no longer deserves obedience”), demonization (“those who hold these ideas are evil”) and dehumanization (“they are not human, moral norms do not apply to them”). You don’t always reach the last step, but some degree of moral disengagement is necessary. And the artificial intelligence revolution (and all the tensions it brings) keeps showing similar signs: for years, accelerationist and posthumanist groups have been ‘operating’ in the shadow of the great social and political discourses. Now, however, they do so openly: as AGI approaches, everything we thought we knew (socially, economically or institutionally) is useless. Or so they would have us believe.
And the best example is Altman’s: the CEO of OpenAI does not have to declare himself a posthumanist to lay the rhetorical tiles along which these discourses will travel. When you reduce the human being to an energy cost comparable to an AI model’s, you are lowering the bar for justifying “anything” in the name of efficiency.

But what exactly is all this talk about posthumanisms and accelerationisms? Although they are two different philosophical traditions (posthumanism questions classical humanism and lays the foundations for its overcoming, while accelerationism is a family of ideologies that propose accelerating certain dynamics, technological or capitalist, to provoke radical social change), the truth is that in recent years they have ended up converging. And, beyond that, they are providing the mental framework that allows certain decisions to be made that, in other scenarios, would not be socially acceptable. When the human being ceases to be the ideological ‘center’ of the system, acceleration becomes the great political principle, and AGI becomes the utopian destiny of a post-scarcity society (the modern equivalent of the Christian heaven or the Marxist classless society), then everything that opposes it, rightly or wrongly, will be labeled old, outdated or obsolete. Altman’s statements in India are not an accident: they are part of the delegitimization of the current system of values that the coming revolution needs and that, as we can see, is already underway.

Image | Xataka

In Xataka | “A place of joy with pain”: the phrase that summarizes the Aztec philosophy for being happier in this life

Sam Altman says he’s terrified of a world where AI companies believe themselves to be more powerful than the government. That’s exactly what he’s building

Sam Altman sat down over the weekend in front of his audience on X to answer questions about the agreement OpenAI has just signed with the United States War Department. What came out of that session was a beautiful, involuntary X-ray of the biggest contradiction in the sector right now.

Why it matters. The CEO of OpenAI said he is terrified of “a world where AI companies act as if they have more power than the government.” The phrase sounds good, it is pure marketing, and it seeks to position OpenAI as a powerful but very responsible and honest actor. The problem is the context in which he said it: hours before OpenAI signed that agreement, the US government had labeled Anthropic, its direct rival, a “supply chain risk” for refusing to sign under those same conditions. Altman went to put out the fire just as someone was accusing him of setting it.

Between the lines. Altman’s speech rests on a premise worth watching: that a democratically elected government must always prevail over unelected private companies. It is a philosophically reasonable position, but he applies it selectively. Altman acknowledged that the deal “was rushed and the picture is not good,” and that OpenAI moved quickly to “de-escalate” tension between the Pentagon and the industry. In other words, his company made a unilateral strategic decision about how the entire AI industry should relate to the military establishment. That doesn’t exactly sound like institutional deference.

The contrast. Anthropic opted for something different: requiring explicit safeguards against the use of its AI for mass surveillance or autonomous weapons. And the government penalized it. OpenAI accepted a more ambiguous formula (“for all legal uses”) and won the contract. Several OpenAI employees signed a letter supporting Anthropic’s position. Claude became the most downloaded free app in Apple’s App Store that weekend, surpassing none other than ChatGPT. The market also has opinions.

Yes, but.
It’s fair to admit that Altman’s position has some internal logic: if AI is going to be integrated into military systems anyway, it may be preferable that it do so under negotiated conditions rather than under coercion. And he’s right about one thing: labeling Anthropic a supply chain risk, a designation intended for hostile foreign suppliers, applied to an American AI safety company is, in his own words, “an extremely frightening precedent.”

The big question. Who really decides how AI is used in military contexts? The companies that build it, the governments that hire it, or the engineers who design it and who are increasingly organized to influence those decisions? Altman says he believes in the democratic process. But OpenAI negotiated privately, signed privately, and made only a fraction of the contract public. Democratic transparency starts there.

In Xataka | Anthropic has become the Apple of our era and OpenAI our Microsoft: a story of love and hate

Featured image | Xataka

Sam Altman has spent his entire life saying one thing and doing exactly the opposite. This time it didn’t even take him 48 hours.

A great Mecano song (I know, this is very Kiss FM) said that ‘your face is a Signal ad.’ In case any of our painfully young readers don’t know, Signal is a brand of toothpaste. And if anyone’s face is exactly like that, it is Sam Altman’s: the CEO of OpenAI, who with a perfect, convincing smile tries to persuade the world that his company is just as perfect and convincing. For many people, that is no longer the case.

What has happened. These days we have seen how the US and its Department of Defense (or of War, as they like to call it now) have decided that if an AI company wants to work with them, it will have to let them use the AI as they see fit. Need to spy on people en masse? Then the AI spies on them, no question. Need the AI to help develop lethal autonomous weapons? That too.

Anthropic stands firm. But lo and behold, precisely the company that was working with the Pentagon said absolutely not. Anthropic, which had been collaborating with the government for months (Claude was used in the arrest of Nicolás Maduro), made it clear that there are red lines it will not cross.

If Anthropic doesn’t want to, let OpenAI do it. The Pentagon has threatened to turn Anthropic into a pariah company, but for now it has made no official move. What has happened is that the US government has decided to change its technological partner: OpenAI has replaced Anthropic and appears to have reached an agreement to work with US defense and security agencies.

Sam Altman seizes the opportunity. Altman himself indicated as much in an announcement on Twitter (I still resist calling it “X”), explaining that his company had agreed to deploy its models on the US War Department’s classified network. The curious thing is that this agreement establishes the same red lines Anthropic had: no spying on American citizens and no autonomous weapons.
In the official announcement they even highlight that their agreement “has more safeguards than any previous agreement for classified AI deployments, including Anthropic’s.” There is, for example, one additional requirement: that their models not be used for “social credit” systems in which citizens are rated based on the information collected about them.

But. Although both Sam Altman and the company’s blog appear to place limits on the War Department’s use of its AI, the terms of the agreement contradict Altman’s claims. The announcement cites a specific paragraph of the agreement that explicitly states the following:

“The War Department may use the AI system for all lawful purposes, consistent with applicable law, operational requirements, and well-established security and oversight protocols. The AI system will not be used to independently direct autonomous weapons in any case where human control is required by law, regulation or Department policy, nor will it be used to make other high-risk decisions that require approval from a similarly competent human decision-maker.”

Mass spying on American citizens is legal in certain scenarios under the Patriot Act passed after the 9/11 attacks, and that would allow the AI to process data and communications collected by mass surveillance systems. Jeremy Lewin, a State Department official, has indicated that this agreement “flows from the pillar of ‘all legitimate use’,” and points out that what Altman proposes regarding red lines is not as clear-cut as it seems.

Internal protests. Last Friday at 5:01 p.m. was the deadline for Anthropic to accept the Pentagon’s terms, but it did not do so. That morning, several OpenAI and Google employees showed their support for the rival company’s ethical and moral stance, and almost 800 of them (681 from Google, 96 from OpenAI) signed an open letter entitled “We will not be divided.”

Altman says one thing, does another.
In an interview with CNBC, Sam Altman said that despite all his differences with Anthropic, “I trust them as a company, and I think they really care about safety.” On Thursday, the CEO of OpenAI sent an internal statement expressing his desire for “things to de-escalate between Anthropic and the Department of Defense.” That message came to nothing just two days later, when he announced the agreement with that same Department.

The world against OpenAI. Many have ended up criticizing OpenAI’s behavior on social networks. Several Reddit threads appeared encouraging users to “Cancel ChatGPT,” with thousands of upvotes and thousands more indignant comments about how OpenAI and Sam Altman have taken advantage of this situation. We have seen critical movements like this in the past (Facebook, Netflix), but what usually happens is that after those first moments, companies recover from the criticism and even come out stronger, for a simple reason: human beings have very short memories.

In Xataka | OpenAI has a problem: Anthropic is succeeding right where the most money is at stake

AI consumes obscene amounts of energy. Sam Altman compares it to the cost of “training” humans

OpenAI CEO Sam Altman took part in an event organized by The Indian Express. During the interview he made some striking statements, but the most notable of all was the one he dedicated to what it costs to train an AI model. In fact, he complained that many discussions of ChatGPT’s energy consumption are unfair.

Training humans also consumes a lot. The interviewer asked Altman about ChatGPT’s energy consumption, and Sam Altman took a few seconds to answer before making a peculiar comparison (bold mine):

“One of the things that is always unfair in this comparison is that it talks about how much energy it takes to train an AI model compared to what it costs a human to perform an inference query. But it also takes a lot of energy to train a human. It takes about 20 years of life and all the food you eat during that time before you become intelligent. And not only that, it took the widespread evolution of the hundred billion people who have lived and learned not to be eaten by predators and to understand science and so on to create you. The fair comparison is: once the model is trained, how much energy does it take ChatGPT to answer that question compared to a human? And AI has probably already caught up in terms of energy efficiency if we measure it that way.”

A previous Epoch AI study corroborates that energy consumption during inference (when we actually use ChatGPT, for example) is low.

Training is one thing, inference another. The answer may be controversial, but to a certain extent it is logical: learning, for both humans and AI, takes time and consumes many resources, but that cost is one thing and the cost of inference, of “applying that training,” is another. Once we have learned, it is not too difficult to answer things.
This is what Altman is trying to point out here: he recognizes that AI does indeed consume a lot of energy in training, but argues that it has become very efficient in the inference phase, when we actually use ChatGPT. The problem is that although Altman claims that inference consumption is minimal, he provides no evidence for it.

The water problem is no longer a problem. He also spoke about the controversial water consumption attributed to large AI data centers. He acknowledged that this was a problem when “we used to use evaporative cooling in data centers.” Now, however, “we don’t do that,” he recalled, and made it clear that accusations that “ChatGPT uses 17 gallons per query, or whatever” are totally false, “totally crazy, it has no connection with reality.” But again, there is still no official data from AI companies on this front.

How much does AI really consume? The truth is that at this point we still do not have really clear data on how much AI consumes, either in the training phase or in the inference phase. Some who have investigated energy and water consumption have gotten it wrong, wildly exaggerating the figures. But in the US, for example, where a large number of data centers are concentrated, there is no legislation that forces transparency about those numbers.

Increasingly efficient models and data centers. One of the most interesting studies was the one published by Epoch AI in February 2025, which also concluded that AI did not actually consume as much as it was said to; in fact, it consumed relatively little, and the models have only improved in efficiency since then. Chips and cooling systems have also improved, and although data centers certainly require enormous amounts of energy, we are still flying blind on this front.

In Xataka | Spain has a plan to capture more data centers than anyone else: “shield” them from energy costs

Elon Musk and Sam Altman predicted that AI will force the establishment of a universal basic income. The United Kingdom is already considering it

The world’s main economic organizations do not agree in their forecasts about the real impact the arrival of AI will have on the economy and the labor market. A World Economic Forum report estimated that AI will create 170 million new jobs. The problem is that, until that happens, it will destroy about 92 million. The US Senate considers that some 100 million jobs could be destroyed. Elon Musk and Sam Altman have repeated on several occasions that, to minimize this impact on society, a universal basic income will have to be implemented. In the United Kingdom, the government is debating measures to protect workers along the same lines.

Millionaires ask for a basic income. Some of the top AI billionaires, such as Elon Musk, have predicted that universal basic income will be a reality in a future dominated by AI. While it is true that Musk’s position rests on a more optimistic vision of the future, one in which “work will be optional” and it will not be necessary to save for retirement, the billionaire does not deny that universal income will be a necessary instrument to achieve it. Along the same lines, although with a more realistic outlook, OpenAI CEO Sam Altman has funded studies on the effects of universal basic income in a scenario of job destruction, and on how this income helps recipients retrain for new jobs.

Companies do not need human labor. In a blog post, Dario Amodei, CEO of Anthropic, warned that AI will have an “unusually painful” impact on the labor market. “AI is not a substitute for specific human jobs, but rather a general substitute for human labor,” he wrote. For this reason, this mechanism is increasingly seen as a transition instrument that allows employees laid off due to the arrival of AI to retrain and re-enter the labor market.
A systematic review by the Department of Economics at the University of Huelva, covering more than 50 empirical cases, points out that universal basic income improves spending on basic needs without recipients giving up the search for work, so it could be a way for workers to train for the new jobs created by AI.

The UK Government is debating it. In an interview with the Financial Times, Jason Stockwood, UK Investment Minister, revealed that within the government “it is definitely being talked about.” The minister noted that “without a doubt, we are going to have to think very carefully about how to smooth the process of winding down those industries that disappear, through some type of UBI and some type of lifelong learning mechanism so that people can retrain.” According to Bloomberg, Morgan Stanley reported a net job loss of 8% in the UK over the last 12 months due to AI, the highest among large economies, which explains the British government’s urgency to start evaluating formulas to cushion this impact.

A lifeline to keep them afloat. Unlike Musk’s “optimistic” vision, British representatives do not see the arrival of AI as a liberating force that makes work optional, but as a problem that will temporarily leave millions of workers unemployed and in need of help. So said Sadiq Khan, mayor of London, concerned about the high rate of “white collar” unemployment that the arrival of AI could cause in a city like London. Liz Kendall, UK Secretary of Technology, spoke along the same lines, saying that, although it is true that more jobs will be created than lost, there will be a transition period in which AI will be “a weapon of mass destruction of jobs. We will not leave people and communities to fend for themselves,” as reported by The Guardian.

The million-dollar question: who finances that income?
It is easy to predict that universal basic income would be a solution for those who have no job to return to because AI has automated it. What is more complicated is determining who will finance that basic income. Bill Gates gave some clues almost a decade ago, arguing that it should be the companies that use robots in their processes that pay for that subsidy: “if a robot replaces the work of a human, that robot must pay taxes like a human.” Ioana Marinescu, economist and associate professor of public policy at the University of Pennsylvania, considers that taxing technology companies could slow their local adoption, making the transformation more gradual and extending the transition period, which would give the labor market time to adapt.

In Xataka | AI and its impact on the labor market: how the perception of its arrival varies by country, explained in a graph

Image | Unsplash (Alexander Gray, enrico bet)

Sam Altman is trying to buy his own rocket company to compete with SpaceX. The key: data centers

The rivalry between Sam Altman and Elon Musk has just reached its highest point yet: space. And all so that OpenAI can deploy its own data centers there.

The news. As revealed by the Wall Street Journal, the CEO of OpenAI has been exploring the purchase of Stoke Space, a Seattle startup that develops reusable rockets, with the goal of building data centers in space. Although talks with Stoke Space cooled in the fall, the move confirms a trend we’ve been observing for months: Silicon Valley is outgrowing the Earth to fuel AI.

Sam’s plan. According to the Journal’s sources, Sam Altman was not looking for a launch provider, but rather an investment that would give OpenAI majority control of Stoke Space. Stoke Space, founded in 2020 by former Blue Origin engineers, is developing a fully reusable rocket called ‘Nova’ to compete with SpaceX’s Falcon 9.

What for. Altman maintains a tense rivalry with Elon Musk, so the logic of this move would be to reduce OpenAI’s dependence on Musk’s rockets should it decide to deploy servers in space. But beyond that there is a purely energy-driven motivation: the computing demand of AI is so insatiable that the environmental consequences of keeping it on Earth will be unsustainable. In certain orbits, however, solar energy is available 24/7, and the vacuum of space offers an infinite heat sink to cool equipment without consuming water.

The space data center fever. Altman is not alone in this race. What until recently seemed like an eccentricity has become a serious project for the big technology companies.

And what does Musk say? The irony of Altman pursuing his own rocket company is that the industry’s undisputed leader, Elon Musk’s SpaceX, already has the infrastructure in place.
While his competitors design prototypes and seek financing, Musk has cut off the debate with his usual forcefulness: faced with the discussion about the need to build new orbital data centers, he declared that there is no need to reinvent the wheel: “It will be enough to scale the Starlink V3 satellites… SpaceX is going to do it.”

Images | Brazilian Ministry of Communications | Village Global

In Xataka | Building data centers in space was the new hot business. Elon Musk just broke it with a tweet

Google has OpenAI cornered. Altman has reasons to go into crisis mode

Sam Altman has pressed the red button at OpenAI. After three years of being the startup that terrorized Google, it is now Pichai’s company that has the creator of ChatGPT on the ropes.

Why it matters. OpenAI’s CEO sent an internal memo on Monday declaring a “code red”: all resources are now focused on improving ChatGPT. Projects like advertising in the free version, AI agents for health and shopping, and the rollout of the personal assistant Pulse are postponed. The company that forced Google to react is now the one reacting.

The backdrop. In 2022, Google panicked when ChatGPT changed our expectations of generative AI. Three years later, the roles have been reversed. Gemini 3, launched a few weeks ago, has surpassed OpenAI’s models on key benchmarks and has generally arrived to a great reception. Marc Benioff, CEO of Salesforce, said it bluntly a few days ago: “I’ve been using ChatGPT every day for three years. After two hours with Gemini 3, I won’t go back.”

The figures. Google has gone from 450 million monthly active users on Gemini in July to 650 million in October. ChatGPT maintains its lead with more than 800 million weekly users, but the speed at which Google is advancing is what has set off all the alarms. The difference in spending capacity is abysmal: Google brought in $102 billion in the last quarter alone, with three quarters of it coming from advertising. OpenAI projects $20 billion in revenue this year, but will need $200 billion by 2030 to be profitable, according to its own projections. Its infrastructure commitments add up to $1.4 trillion over the next eight years.

The money trail. Google can afford to spend between $91 billion and $93 billion this year on AI infrastructure because it has a high-margin cash machine behind it. OpenAI, on the other hand, continues to rely on funding rounds while racking up record losses.

Yes, but. OpenAI still retains advantages.
Its 800 million weekly users represent a moat that can only be conquered person by person. ChatGPT is today synonymous with conversational AI in the same way that Google is with search. Changing the habits of hundreds of millions of users is much harder than convincing a few CEOs to switch chip suppliers.

Between the lines. OpenAI’s refusal to monetize ChatGPT through advertising is increasingly inexplicable. Google dominated search precisely because it understood that an advertising model doesn’t just generate revenue: it improves the product. More users generate more feedback, more purchasing signals allow for more personalized responses, and margins improve as scale grows. OpenAI has been avoiding this evidence for three years, yet it has not stopped signing spending commitments exceeding $1 trillion.

Unexpected twist. Three years ago it was Google that declared a code red in the face of the ChatGPT threat. Now the empire strikes back with an overwhelming structural advantage: control of distribution (Android, Chrome, Search, YouTube, Docs…), comfortable financial capacity and its own chips. OpenAI has the users, but Google has the money, the infrastructure and the patience to fight a war of attrition.

At stake. The question is whether OpenAI will survive as an independent company if its technological advantages evaporate and its business model keeps failing to add up. Altman usually says he doesn’t like to think too much about the competition. Those days are over.

In Xataka | NVIDIA is the most valuable company in the world because it had no competition. Until Google started making chips

Featured image | Google, OpenAI

Sam Altman does not take kindly to being asked about OpenAI’s astronomical losses

OpenAI has a serious liquidity problem. It earns a lot, but those earnings are crumbs compared to what it needs to bring in. The numbers don’t add up, but that hasn’t stopped the company from signing billion-dollar agreements. Brad Gerstner, an OpenAI investor and podcaster, asked Sam Altman about this problem, and it seems Altman wasn’t amused.

Defensive. As Futurism recounts: “How can a company with $13 billion in revenue commit to spending $1.4 trillion? You’ve heard the criticism, Sam,” asked Brad Gerstner on his podcast, with Satya Nadella, incidentally, listening intently to the exchange. Altman’s response was to get defensive: “If you want to sell your shares, I will find a buyer for you. Enough is enough.” The interviewer laughed it off, and Altman continued in a soft but clearly sarcastic tone: “There are many people who speak with great concern about our products and who would be happy to buy shares.”

Figures. OpenAI recently reached a $500 billion valuation, becoming the most valuable private company in the world. Not only is it the most valuable: it has signed agreements with some of the most important tech companies, such as NVIDIA, AMD, Broadcom and, just yesterday, Amazon. The tech industry has tied its destiny to OpenAI’s. If it fails, the consequences could be catastrophic.

Losses. Brad Gerstner is not at all wrong to ask Altman about the inconsistency between his company’s expenses and earnings. A few days ago, Microsoft presented its results and, given that it owns 27% of OpenAI, The Register calculated how much money Altman’s company had lost in the last quarter. The figure is dizzying: $11.5 billion in just 90 days. It’s something to be worried about.

For-profit. After months of rumors about an impending divorce, Microsoft and OpenAI finally signed a kind of separation of assets.
In parallel, OpenAI finally achieved its long-desired goal: becoming a for-profit company. This gives it more flexibility to collaborate with third parties and raise new investment rounds.

More fuel. Despite the more than justified doubts about the astronomical spending on AI, the big technology companies announced a few days ago that they were going to spend even more than planned. Investors are worried; just ask Zuckerberg, who despite posting record revenue saw his company’s shares fall 8%.

A question of faith. Sam Altman shares that optimism and, responding to Gerstner, stated that “revenues are growing rapidly (…) we are making an open bet that they will continue to grow.” Curiously, he offers no figures to back it up.

Image | TechCrunch, Flickr (License CC BY 2.0)

In Xataka | The world of AI has a problem: there is no energy for so many chips

OpenAI has become the “Fast Food” of AI. And that means that for Sam Altman the business is attention, not AGI

It was a foregone conclusion that OpenAI was going to launch its own browser, so yesterday's launch of the Atlas browser didn't take us much by surprise. What matters is that the company never stops releasing products and services. The pace is the most extraordinary we have seen in recent years, and the obvious question is: what is OpenAI pursuing with this strategy?

OpenAI is the world's great conveyor belt of AI products. In recent weeks OpenAI has not stopped launching new AI services and products that have flooded the market. Some examples: And that's not counting the recently announced agreements with NVIDIA, AMD and Broadcom, which make it clear that the pace of OpenAI announcements is absolutely dizzying: too many new things, too often. Why?

The hype race as a business priority. That extraordinary flurry of releases suggests that OpenAI's big corporate priority is not so much the vaunted pursuit of AGI as dominating the conversation and, above all, the attention economy. What OpenAI wants is for us to be constantly talking about it, and these launches are not exactly small: they all pose notable changes in its ecosystem and in the technology industry itself.

Smokescreen. Such frenzy also acts as a strategic smokescreen. With this bombardment of releases (browser, applications, SDKs, improved models), Altman and his team not only generate more hype but saturate the competitive space. Rivals barely have time to assimilate or replicate a feature before the next one has already been announced.

Towards an operating system. The launch of Atlas is an especially significant move. With it, it seems clear that OpenAI no longer wants to be a simple layer, the engine of AI, but a complete operating environment in the style of WeChat or the App Store. In fact, it wants to be the Windows of AI; either that bet pays off, or it is going to be the mother of all bubbles.
Expectations attract new users (and investors). These constant moves also generate new expectations, even if only temporarily. OpenAI has managed to partly conquer the attention economy with launches such as the Studio Ghibli-style images or, more recently, Sora. This has allowed it to attract millions more free users, whom the company then tries to convert into paying users. Not only that: its growth also encourages investors to participate in the company's multimillion-dollar funding rounds.

And what about AGI? While all these launches take place, the holy grail of AI, achieving artificial general intelligence (AGI), seems to have taken a back seat. It is as if that discourse had become an empty mantra, or a long-term goal that is not credible amid this chaos. Altman has managed to replace the philosophical conversation (the one prompted by the hypothetical arrival of AGI) with a consumer conversation.

The fast food of AI. The AI ecosystem that OpenAI is creating has adopted a consumption pattern similar to the one we experience on social networks: fast and ephemeral, driven by the latest viral news. The Studio Ghibli-style visuals were exciting for a couple of weeks, and the same has happened with Sora 2, but that "wow" effect fades quickly. What does OpenAI do to revive the hype? Launch a new product. Atlas is the latest example.

Seeking to be a de facto monopoly. With all these moves, OpenAI keeps attracting more and more users, dominating the conversation and gaining attention. That may not bring it what it really needs right now (revenue), but it solidifies its position as the absolute benchmark and helps it become what it is really after: the de facto monopoly of AI.

Image | Mariia Shalabaieva

In Xataka | ChatGPT will let you have erotic conversations. Welcome to emotional intimacy with an AI

Sam Altman spent $6.5 billion to create an AI gadget alongside Jony Ive. Now they face a problem

In September 2023, Sam Altman, CEO of OpenAI, and Jony Ive, former chief design officer at Apple, got together to devise a revolutionary device they called "the iPhone of AI." They were serious: this year OpenAI bought Ive's startup for a whopping $6.5 billion. A halo of mystery has surrounded this collaboration since it was announced, and now news is arriving because the project has problems, some more serious than others.

The "problems." The Financial Times tells the story. OpenAI wants to launch its mysterious would-be bestseller gadget next year. However, sources close to the company say the project has run into critical problems that could delay its arrival. The team is having difficulty deciding on the personality the assistant will have, something crucial for a device designed to be always on. There are also doubts about whether to make it like classic assistants, which only activate when invoked, or to let it act on its own when it considers that useful.

The big problem. Assuming Altman is right and his gadget becomes a global success, the most serious problem they face is that OpenAI does not have the computing power needed to run its models on a mass-market device, and that costs money (something OpenAI doesn't exactly have to spare). In the case of Alexa or the Google Assistant, Amazon and Google have plenty of computing power of their own to make them work without depending on anyone else. OpenAI has ChatGPT, the most popular chatbot in the world, and needs to rely on external investment: first its alliance with Microsoft, then SoftBank's investment, then the deal with Nvidia, and the newly announced one with AMD. If the gadget they want to launch ends up being as massive as Altman wants, the numbers don't add up.

The device. We don't know what it will be called or what design it will have.
The details that Altman and Ive gave at the time were quite vague. In fact, they focused more on saying what it will not be than on what it will be. It will be similar to a mobile phone, but it will not be a phone. Nor will it have a screen; we will communicate with it through cameras, microphones and speakers. And it will not be glasses either. Nothing concrete, but for the moment it is very reminiscent of Humane's AI Pin, which failed resoundingly.

OpenAI goes for hardware. OpenAI brings in $1 billion per month, but it burns money much faster than it comes in, and it would need to bring in ten times more to be profitable. Even so, the company is already valued at half a trillion dollars. Entering the hardware business makes sense as a way of justifying that value. Beyond the doubts surrounding this mysterious device (and there are many), OpenAI is very serious about creating a hardware division. When it bought Ive's startup it added 20 employees, and later it hired several Apple experts as well as members of the Meta team in charge of the Quest and smart glasses. We will have to keep waiting to see if it ends up materializing.

In Xataka | Data centers for AI are an energy hole. Jeff Bezos's solution: Build them in space
