Sam Altman has had another great idea for finally charging users all the money he needs: a bill at the end of the month

We are used to paying the electricity or water bill because electricity and water have become basic, universal goods. Well, Sam Altman, CEO of OpenAI, is convinced that artificial intelligence will be exactly that: a commodity, a basic and totally universal good. This implies, of course, that at some point, just as we pay the electricity or water bill, we will pay a monthly AI bill. Paying for AI will be an everyday thing.

Altman recently participated in an event in Washington DC and raised an idea that has been around for a long time but is clearly gaining strength: that AI will be offered like electricity or water, on demand. The moment you need it, it will be there for you. That, of course, means that just as we now pay for the electricity or water we use, we will also pay for the AI supply we consume. And we will do it at the end of the month in the traditional way: with an invoice from our supplier.

In Xataka | The most powerful AI agent in the world has just arrived: the first thing it does is warn you that it is dangerous

From consuming kWh to consuming tokens. Instead of paying fixed subscriptions, as we usually do now when signing up for ChatGPT Plus or Claude Pro, for example, we would pay that monthly bill. The amount would be based on how many "tokens" (processing units) we have consumed to solve all kinds of tasks.

We have power plants; we will have data centers. This narrative fits Altman like a glove, because it justifies his AI data center megaprojects, and those of the rest of the industry. If AI is to become that universal basic resource, we will need the infrastructure (the "AI power plants") to sustain it. Without that infrastructure, Altman warns, the price of "intelligence" will skyrocket, turning it into an exclusive privilege of the richest or a resource rationed by governments.

Yottaflops of compute. The race for infrastructure has already begun, and the big technology companies are fueling it.
The reason is simple: either they enter that maelstrom or they risk being left out if the AI revolution actually becomes a reality. Lisa Su, CEO of AMD, explained in her opening keynote at CES 2026 that the world will need more than "10 yottaflops" of computing, 10,000 times the AI capacity that existed in 2022, within the next five years to meet the demand posed by this massive use of AI.

Chips are missing… and a lot of energy. The real obstacle to reaching that computing capacity lies not only in the chips (the memory crisis is a side effect of this) but also in energy. Data centers consume a lot of it, and national power grids may end up lacking the capacity to supply that energy.

OpenAI will not stop spending. Greg Brockman, president of OpenAI, explained in December that their projects, gigantic as they may seem, will go further. Although the company has already committed to investing $1.4 trillion with its partners in data centers over the next eight years, OpenAI wants to "get ahead of the future, but I don't think we can be, no matter how ambitious we want to dream of being right now." In other words, he believes all their estimates and projects may end up dwarfed by the true scale AI can reach.

Big Tech wants to bill you at the end of the month. Turning AI into a commodity that reaches every home would be an absolute triumph for the companies investing in it. The tech industry has barely managed to pass its costs on to the user beyond things like our internet connection or, at most, our spending on streaming services, which resemble current AI plans. If it achieves that end-of-month bill, one that hundreds of millions (perhaps billions) of people would pay, AI would become an extraordinary revenue machine.

In Xataka | OpenClaw changed the rules of the AI race.
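The metered, utility-style billing the article describes is easy to picture in code. The sketch below is purely illustrative: the per-token prices, the split between input and output tokens, and the usage figures are invented assumptions, not actual OpenAI rates.

```python
# Illustrative sketch of utility-style, pay-per-token AI billing.
# All prices and usage figures are hypothetical assumptions,
# not real provider rates.

PRICE_PER_MILLION_INPUT = 0.50   # USD per 1M input tokens (assumed)
PRICE_PER_MILLION_OUTPUT = 1.50  # USD per 1M output tokens (assumed)

def monthly_bill(input_tokens: int, output_tokens: int) -> float:
    """Compute an end-of-month invoice from metered token usage,
    much the way a utility bills for the kWh you consumed."""
    cost = (input_tokens / 1_000_000) * PRICE_PER_MILLION_INPUT
    cost += (output_tokens / 1_000_000) * PRICE_PER_MILLION_OUTPUT
    return round(cost, 2)

# A heavy user consuming 40M input and 10M output tokens in a month:
print(monthly_bill(40_000_000, 10_000_000))  # 35.0
```

The key design difference from today's flat subscriptions is that cost scales with consumption: a light month costs almost nothing, a heavy month costs proportionally more, exactly like an electricity meter.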
The news "Sam Altman has had another great idea to finally charge the user all the money he needs: a receipt at the end of the month" was originally published in Xataka by Javier Pastor.

Sam Altman is laying the foundations for post-humanism as the philosophical current of the AI era. It's not good news

"But it also takes a lot of energy to train a human. It takes about 20 years of life and all the food you consume during that time to become intelligent." These two sentences, delivered at the India-AI Impact Summit 2026, were enough to set social networks on fire. But Sam Altman didn't stop there: "Not only that, it took the widespread evolution of the 100 billion people who have lived and who learned not to be eaten by predators and to understand science and so on to create you," he continued. Therefore, he argued, the criticisms about "how much energy it takes to train an AI model" are extremely unfair.

The most "unpopular" technology in history… And it's curious. Not because his argument is not understandable (or even reasonable), but because Altman and the rest of the AI bigwigs don't seem to realize that they are doing everything possible to make AI extremely unpopular among the population. Maybe it's nothing new. Maybe it's something similar to what happened with the salesmen of fabric-making machines in the midst of the industrial revolution, something similar to what motivated movements like the Luddites, whose history dozens of historians later rewrote as that of poor technophobes. What has changed is that we are now broadcasting it to the entire world, live, and very insistently. The discourse they use to 'sell' their technology to investors, technical elites and politicians around the world can only be understood, at the public level, as a very sophisticated way of saying: 'human things get in the way.' Or not so sophisticated, of course.

…but one that is finding its "public". Over the last few years, in fact, the process has become less and less subtle and more blatant. It is not limited to AI companies; it is an increasingly clear phenomenon: people speaking to a convinced hyperminority while alienating the vast social majority.
And artificial intelligence is the tip of the spear. It wouldn't be a problem if there weren't something else: the current great technological battle is not only technical; it is ideological, philosophical and a battle of values. For the social changes they hope for to succeed, the 'Overton window' needs to move as quickly as possible. And it's working.

The best example is Japan: Team Mirai ran in the last election. As Antonio Ortiz explained, it is "a new Japanese party founded by engineers" with "a fairly accelerationist program: government chatbots and databases for transparency of donations and to make politics 'faster', reduce paperwork and achieve an increase in productivity to compensate for the labor shortage." Well, that party just won 11 seats and 7% of the vote. In a way, two apparently contradictory processes are two legs of the same phenomenon: the discourse becomes more explicit as the population becomes more receptive to it.

And changing the world is also (and above all) changing ideas. We tend to have a softened vision of social change. However, several psychosocial processes are usually key for such changes to take hold: delegitimization ("what ruled until now no longer deserves obedience"), demonization ("those who hold these ideas are evil") and dehumanization ("they are not human, moral norms do not apply"). You don't always reach the last step, but some degree of moral disengagement is necessary. And the artificial intelligence revolution (and all the tensions it brings) keeps showing similar signs: for years, accelerationist and posthumanist groups have been 'operating' in the shadow of the great social and political discourses. Now, however, they do it openly: as AGI approaches, everything we thought we knew (socially, economically or institutionally) is useless. Or so they would have us believe.
And the best example is Altman's: the CEO of OpenAI does not have to declare himself a posthumanist to lay the rhetorical tiles over which these discourses will travel. When you reduce the human being to an energy cost comparable to an AI model's, you are lowering the bar to justify "anything" in the name of efficiency.

But what exactly is all this talk about posthumanism and accelerationism? Although they are two different philosophical traditions (posthumanism questions classical humanism and lays the foundations for going beyond it, while accelerationism is a family of ideologies that propose accelerating certain dynamics, technological or capitalist, to provoke radical social change), the truth is that in recent years they have ended up converging. And, beyond that, they are providing the mental framework that allows certain decisions to be made that, in other scenarios, would not be socially acceptable. When the human being ceases to be the ideological 'center' of the system, acceleration becomes the great political principle and AGI becomes the utopian destiny of a post-scarcity society (the modern equivalent of the Christian heaven or the Marxist classless society), then everything that opposes this, rightly or wrongly, will be branded old, obsolete or outdated. Altman's statements in India are not an accident: they are part of the delegitimization of the current system of values that the next revolution needs and which, as we can see, is already underway.

Image | Xataka

In Xataka | "A place of joy with pain": the phrase that summarizes the Aztec philosophy to be happier in this life

Sam Altman says he's terrified of a world where AI companies believe themselves to be more powerful than the government. That's exactly what he's building

Sam Altman sat down over the weekend before his audience on X to answer questions about the agreement OpenAI has just signed with the United States War Department. What came out of that session was an unintentionally beautiful x-ray of the sector's biggest contradiction right now.

Why it matters. The CEO of OpenAI said he is terrified of "a world where AI companies act as if they have more power than the government." The phrase sounds good, it is pure marketing, and it seeks to position OpenAI as powerful but very responsible and honest. The problem is the context in which he said it: hours before OpenAI signed that agreement, the US government labeled Anthropic, its direct rival, a "supply chain risk" for refusing to sign under those same conditions. Altman went to put out the fire just as someone was accusing him of setting it.

Between the lines. Altman's speech rests on a premise worth examining: that a democratically elected government must always prevail over unelected private companies. It is a philosophically reasonable position, but he applies it selectively. Altman acknowledged that the deal "was rushed and the picture is not good," and that OpenAI moved quickly to "de-escalate" tension between the Pentagon and the industry. In other words, his company made a unilateral strategic decision about how the entire AI industry should relate to the military establishment. That doesn't exactly sound like institutional deference.

The contrast. Anthropic opted for something different: requiring explicit safeguards against the use of its AI for mass surveillance or autonomous weapons. And the government penalized it. OpenAI accepted a more ambiguous formula ("for all legal uses") and won the contract. Several OpenAI employees signed a letter supporting Anthropic's position. That weekend, Claude became the most downloaded free application in Apple's App Store, surpassing ChatGPT. The market has opinions too.

Yes, but.
It's fair to admit that Altman's position has some internal logic: if AI is going to be integrated into military systems anyway, it may be preferable that it do so under negotiated conditions rather than under coercion. And he's right about one thing: labeling Anthropic a supply chain risk, a tool intended for hostile foreign suppliers, when it is an American AI safety company is, in his own words, "an extremely frightening precedent."

The big question. Who really decides how AI is used in military contexts? The companies that build it, the governments that contract it, or the engineers who design it and who are increasingly organized to influence those decisions? Altman says he believes in the democratic process. But OpenAI negotiated privately, signed privately, and made only a fraction of the contract public. Democratic transparency starts there.

In Xataka | Anthropic has become the Apple of our era and OpenAI our Microsoft: a story of love and hate

Featured image | Xataka

Sam Altman has spent his entire life saying one thing and doing exactly the opposite. And this time it didn’t even take 48 hours.

A great Mecano song (I know, this is very Kiss FM) said that 'the face you see is a Signal ad'. And in case any of our painfully young readers don't know, Signal is a brand of toothpaste. If anyone's face is exactly like that, it is Sam Altman, CEO of OpenAI, who with a perfect, convincing smile tries to convince the world that his company is just as perfect and convincing. For many people, that is no longer the case.

What has happened. These days we have seen how the US and its Department of Defense (or War, as they like to call it now) decided that if any AI company wants to work with them, it will have to let them use the AI as they see fit. They need to mass-spy on people? Then the AI is used for that, no problem. They need the AI to develop lethal autonomous weapons? That too.

Anthropic stands firm. But lo and behold, precisely the company that was already working with the Pentagon said no way. Anthropic, which had been collaborating with the Government for months (Claude was used in the operation to arrest Nicolás Maduro), made it clear that there are red lines it will not cross.

If Anthropic doesn't want to, OpenAI will. The Pentagon has threatened to turn Anthropic into a pariah company, although for now it has made no official move. What has happened is that the US Government has decided to change its technology partner: OpenAI has replaced Anthropic and appears to have reached an agreement to work with US defense and security agencies.

Sam Altman seizes the opportunity. Sam Altman himself confirmed it: in an announcement on Twitter (I still resist calling it "X") he explained that his company had agreed to deploy its models on the US War Department's classified network. The curious thing is that this agreement establishes the same red lines that Anthropic had: no spying on American citizens and no autonomous weapons.
In the official announcement they even highlight that their agreement "has more safeguards than any previous agreement for classified AI deployments, including Anthropic's." There is, for example, one extra requirement: that their models not be used for "social credit" systems that rate citizens based on information collected about them.

But. Although both Sam Altman and the company's blog appear to place limits on the War Department's use of its AI, the terms of the agreement contradict Altman's claims. The announcement cites a specific paragraph of the agreement that explicitly states the following:

"The War Department may use the AI system for all lawful purposes, consistent with applicable law, operational requirements, and well-established security and oversight protocols. The AI system will not be used to independently direct autonomous weapons in any case where human control is required by law, regulation or Department policy, nor will it be used to make other high-risk decisions that require approval from a similarly competent human decision-maker."

Mass surveillance of American citizens is legal in certain scenarios under the Patriot Act passed after the 9/11 attacks, and that would allow the AI to process data and communications collected by mass surveillance systems. Jeremy Lewin, a State Department official, has indicated that this agreement "flows from the pillar of 'all legitimate use'", and points out that the red lines Altman touts are not as clear-cut as they seem.

Internal protests. Last Friday at 5:01 p.m., the deadline arrived for Anthropic to accept the Pentagon's terms, and it did not do so. That morning, several OpenAI and Google employees showed their support for the rival company's ethical and moral stance, and almost 800 of them (681 from Google, 96 from OpenAI) signed an open letter entitled "We will not be divided."

Altman says one thing, does another.
In an interview with CNBC, Sam Altman said that despite all the differences he has with Anthropic, "I trust them as a company, and I think they really care about safety." On Thursday, the CEO of OpenAI sent an internal statement expressing his desire for "things to de-escalate between Anthropic and the Department of Defense." The message came to nothing: less than two days later, he announced the agreement with that same Department.

The world against OpenAI. Many have ended up criticizing OpenAI's conduct on social networks. On Reddit, several posts encouraging users to "Cancel ChatGPT" appeared, with thousands of upvotes and thousands of comments indignant at the way OpenAI and Sam Altman have taken advantage of the circumstances. We have seen critical movements in the past (Facebook, Netflix), but it usually happens that after those first moments, companies recover from the criticism and even come out stronger, for a simple reason: human beings have very short memories.

In Xataka | OpenAI has a problem: Anthropic is succeeding right where the most money is at stake

AI consumes obscene amounts of energy. Sam Altman compares it to the cost of “training” humans

OpenAI CEO Sam Altman participated in an event organized by The Indian Express. During the interview he made several striking statements, but the most notable was the one he dedicated to what it costs to train an AI model. In fact, he complained about how unfair many of the discussions about ChatGPT's energy consumption are.

Training humans also consumes a lot. The interviewer asked Altman about ChatGPT's energy consumption. Sam Altman took a few seconds to answer, and then made a peculiar comparison (my bold):

One of the things that is always unfair in this comparison is that it talks about how much energy it takes to train an AI model compared to what it costs a human to perform an inference query. But it also takes a lot of energy to train a human. It takes about 20 years of life and all the food you eat during that time before you become intelligent. And not only that, it took the widespread evolution of the hundred billion people who have lived and learned not to be eaten by predators and to understand science and so on to create you. The fair comparison is if you ask ChatGPT: how much energy does it take, once the model is trained, to answer that question compared to a human? And AI has probably already caught up in terms of energy efficiency if we measure it that way.

A previous Epoch AI study corroborates that energy consumption during inference (when we actually use ChatGPT, for example) is low. Source: Epoch AI.

Training is one thing, inference another. The answer may be controversial, but to a certain extent it is logical: learning, for both humans and AI, takes time and consumes many resources; the cost of training is one thing, and the cost of inference, of "applying that training", is another. Once we have learned, it is not too difficult to answer things.
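Altman's "fair comparison" can be run as a back-of-envelope calculation. Every number below is a rough assumption chosen for illustration (a ~20 W resting brain, one minute to answer a question, and the often-cited ~0.3 Wh-per-query ballpark for a chatbot), not measured data:

```python
# Back-of-envelope comparison of inference energy: human vs. chatbot.
# All figures are illustrative assumptions, not measurements.

HUMAN_BRAIN_WATTS = 20        # rough resting power of a human brain (assumed)
SECONDS_PER_ANSWER = 60       # time a person takes to answer one question (assumed)
CHATBOT_WH_PER_QUERY = 0.3    # often-cited rough per-query estimate (assumed)

def human_answer_wh(watts: float = HUMAN_BRAIN_WATTS,
                    seconds: float = SECONDS_PER_ANSWER) -> float:
    """Energy in watt-hours a brain spends on one answer: W x (s / 3600)."""
    return watts * seconds / 3600

print(round(human_answer_wh(), 3))               # 0.333
print(human_answer_wh() > CHATBOT_WH_PER_QUERY)  # True
```

Under these assumptions the two figures land in the same order of magnitude, which is precisely the point Altman is making about inference (while the enormous training cost, for both humans and models, sits outside this per-answer comparison).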
This is what Altman is trying to point out here: he acknowledges that AI does indeed consume a lot of energy during training, but argues that it has since become very efficient in the inference phase, when we actually use ChatGPT. The problem is that although Altman claims inference consumption is minimal, he does not provide evidence for it.

The water problem is no longer a problem. He also spoke about the controversial water consumption attributed to large AI data centers. He acknowledged that this was a problem when "we used to use evaporative cooling in data centers." Now, however, "we don't do that," he recalled, and he dismissed accusations that "ChatGPT uses 17 gallons per query, or whatever" as totally false, "totally crazy, it has no connection with reality." But again, there is still no official data from AI companies on this front.

How much does AI really consume? The truth is that at this point we still do not have really clear data on how much AI consumes, either in the training phase or in the inference phase. Some of those who have investigated energy and water consumption have erred by wildly exaggerating the figures, but, for example, in the US, where a large number of data centers are concentrated, there is no legislation forcing transparency about those numbers.

Increasingly efficient models and data centers. One of the most interesting studies was the one Epoch AI published in February 2025, which also concluded that AI did not actually consume as much as it was said to; in fact, it consumed relatively little, and the models have only improved in efficiency since then. Chips and cooling systems have improved too, and although data centers certainly require enormous amounts of energy, we remain in the dark on this front.

In Xataka | Spain has a plan to capture more data centers than anyone else: "shield" them from energy costs

Elon Musk and Sam Altman predicted that AI will force the establishment of a universal basic income. The United Kingdom is already considering it

The main economic organizations in the world do not agree in their forecasts about the real impact the arrival of AI will have on the economy and the labor market. A World Economic Forum report estimated that AI will create 170 million new jobs; the problem is that, before that happens, it will destroy about 92 million. The US Senate considers that some 100 million jobs could be destroyed. Elon Musk and Sam Altman have repeated on several occasions that, to minimize this impact on society, it will be necessary to implement a universal basic income. In the United Kingdom, the government is debating measures to protect workers along the same lines.

Millionaires ask for a basic income. Some of the top AI billionaires, such as Elon Musk, have predicted that universal basic income will be a reality in a future dominated by AI. While it is true that Musk's position rests on a more optimistic vision of the future, one in which "work will be optional" and it will not be necessary to save for retirement, the billionaire does not deny that universal income will be a necessary instrument to achieve it. Along the same lines, although with a more pragmatic outlook, the CEO of OpenAI, Sam Altman, has funded studies on the effects of universal basic income in a scenario of job destruction and on how this income helps recipients retrain for new jobs.

Companies do not need human labor. In a blog post, Dario Amodei, CEO of Anthropic, warned that AI will have an "unusually painful" impact on the labor market. "AI is not a substitute for specific human jobs, but rather a general substitute for human labor," he wrote. For this reason, this mechanism is increasingly seen as a transition instrument that allows employees laid off due to the arrival of AI to retrain and re-enter the labor market.
A systematic review by the Department of Economics of the University of Huelva, covering more than 50 empirical cases, points out that universal basic income improves spending on basic needs without participants giving up the search for work, so it could serve as a way for employees to train for the new jobs created by AI.

The UK Government is debating it. In an interview with the Financial Times, Jason Stockwood, UK Investment Minister, revealed that within the Government "it is definitely being talked about." The minister noted that "without a doubt, we are going to have to think very carefully about how to smooth the process of winding down those industries that disappear, through some type of UBI and some type of lifelong learning mechanism so that people can retrain." According to Bloomberg, Morgan Stanley reported a net job loss of 8% in the UK over the last 12 months due to AI, the highest among large economies. That explains the British executive's interest in evaluating formulas to cushion this impact.

A lifeline to keep them afloat. Unlike Musk's "optimistic" vision, British representatives do not see the arrival of AI as a liberating force that makes work optional, but as a problem that will temporarily leave millions of workers unemployed and in need of help. So said Sadiq Khan, mayor of London, concerned about the high rate of "white collar" unemployment that the arrival of AI could cause in a city like London. Liz Kendall, the UK's Technology Secretary, spoke along the same lines, asserting that, although more jobs will be created than lost, there will be a transition period in which AI will be "a weapon of mass destruction of jobs. We will not leave people and communities to fend for themselves," as reported by The Guardian.

The million-dollar question: who finances that income?
It is easy to predict that universal basic income would be a solution for those who have no job to return to because AI has automated it. What is more complicated is determining who will finance that basic income. Bill Gates already gave some clues almost a decade ago, arguing that the companies that use robots in their processes should be the ones to pay for that subsidy: "if a robot replaces the work of a human, that robot must pay taxes like a human." Ioana Marinescu, economist and associate professor of public policy at the University of Pennsylvania, considers that taxing technology companies could slow down their local deployment, making the transformation more gradual and lengthening the transition period, which would give the labor market time to adapt.

In Xataka | AI and its impact on the labor market: how the perception of its arrival varies by country, explained in a graph

Image | Unsplash (Alexander Gray, enrico bet)

Sam Altman is trying to buy his own rocket company to compete with SpaceX. The key: data centers

The rivalry between Sam Altman and Elon Musk has just reached its highest point: space. And all so that OpenAI can deploy its own data centers there.

The news. As revealed by the Wall Street Journal, the CEO of OpenAI has been exploring the purchase of Stoke Space, a Seattle-area startup that develops reusable rockets, with the goal of building data centers in space. Although talks with Stoke Space cooled in the fall, the move confirms a trend we have been observing for months: Silicon Valley is outgrowing the Earth to fuel AI.

Sam's plan. According to the Journal's sources, Sam Altman was not looking for a launch provider, but rather an investment that would give OpenAI majority control of Stoke Space. Stoke Space, founded in 2020 by former Blue Origin engineers, is developing a fully reusable rocket called 'Nova' to compete with SpaceX's Falcon 9.

What for. Altman maintains a tense rivalry with Elon Musk, so the logic of this move would be to reduce OpenAI's dependence on Musk's rockets should it decide to deploy servers in space. But above that there is a purely energy-driven motivation: the computing demand of AI is so insatiable that the environmental consequences of keeping it on Earth would be unsustainable. In certain orbits, by contrast, solar energy is available 24/7, and the vacuum of space offers an effectively limitless heat sink to cool equipment without wasting water.

The space data center fever. Altman is not alone in this race. What until recently seemed an eccentricity has become a serious project for big technology companies.

And what does Musk say? The irony of Altman pursuing his own rocket company is that the industry's undisputed leader, Elon Musk's SpaceX, already has the infrastructure in place.
While his competitors design prototypes and seek financing, Musk has cut the debate short with his usual bluntness: faced with the discussion about the need to build new orbital data centers, he assured that there is no need to reinvent the wheel: "It will be enough to scale the Starlink V3 satellites… SpaceX is going to do it."

Images | Brazilian Ministry of Communications | Village Global

In Xataka | Building data centers in space was the new hot business. Elon Musk just broke it with a tweet

Sam Altman’s biometric project aimed to scan a billion eyes. It has not even reached 2%

World, Sam Altman's ambitious project to verify human identity using iris scans, has managed to register 17.5 million people since its public launch in 2023. A figure that, although it may seem impressive, barely represents 2% of its initial goal of one billion users.

A promise. Altman's idea was to create a global digital identity network verified by ocular biometrics. To do this, users appear before a spherical device called the Orb, which scans their irises and generates a unique digital code, the World ID. In exchange, they can access an application with various services and receive Worldcoin cryptocurrency tokens, currently worth about 60 euro cents each. "He is creating the disease, but he also wants to create the cure," a former employee of the company told Business Insider.

Regulation. The project has run into a wall of institutional rejection. As the outlet notes, Spain, Hong Kong, Portugal, Indonesia, Germany and Brazil have imposed vetoes, suspensions or precautionary orders, while in Kenya it was banned a month after launch. German authorities concluded last year that its data protection measures "would not be sufficient to implement an appropriate level of security against cybercriminals or state attackers." In October, the Philippines issued a cease-and-desist order, Colombia ordered it to halt operations and delete data, and Thailand conducted raids, arresting suspects for operating a digital asset business without a license, according to Business Insider. The Chinese Ministry of State Security, for its part, warned that collecting iris data for cryptocurrencies could pose a threat to national security.

A questioned model. Beyond the legal obstacles, some experts consulted by the outlet have questioned the viability of the project.
Nick Maynard, vice president of fintech research at Juniper Research, said: "I don't see a definitive use case that they have solved that is going to generate significant traction. They need a real purpose to exist, and that is not entirely clear yet." The corporate structure is also complex: Tools for Humanity (based in San Francisco and Munich) develops the technology; the World Foundation, in the Cayman Islands, controls the project; and World Assets Limited, in the British Virgin Islands, manages the token distribution. So far, the company has raised $240 million from investors such as Andreessen Horowitz, Bain Capital and Khosla Ventures, at a valuation of $2.5 billion.

The expansion strategy. According to former employees who spoke with Business Insider, the company opted for an aggressive growth strategy in emerging markets, prioritizing countries where the promise of free cryptocurrency generated traction among economically vulnerable populations. In Mexico, local operators had to cover most of the costs of the scanning locations, although Tools for Humanity paid the rent for a year. In Argentina, external organizers even sent buses of people who traveled to be scanned in exchange for money. Image: World. Luis Ruben De Valadéz, who worked as head of operations in Mexico, told the outlet that he had to raise about 100,000 Mexican pesos (about 4,705.75 euros at the exchange rate) from family and friends to open seven stores in Mexico City. As he explained, independent operators charged commission in Worldcoin, and it was common for exchange houses to spring up near Orb stations, where users immediately traded their tokens for cash.

The monetization dilemma. The company does not charge users to access its platforms, and its CEO, Alex Blania, has promised that it will not become a data broker. The company is known to earn revenue from verification fees (World ID fees) when external applications use its services.
It also earns income through a program that allows partners to rent or buy their own Orbs, and from processing fees on its World Chain blockchain. However, a former employee told the outlet that the company had doubts about whether these fees would generate profits on their own, indicating that its financial future would depend above all on a continued flow of capital from investors. "I have trouble seeing it as a business. There is no incentive to buy or lease an Orb beyond making money by scanning tons of eyes, and for users it is to get more coins," Martha Bennett, vice president and principal analyst at Forrester, told Business Insider.

Betting on alliances. To accelerate growth, World has announced partnerships with established companies. There is a pilot program with Match Group to verify Tinder users in Japan, and agreements with Stripe, Visa and the gaming company Razer. According to Semafor, Reddit was also in talks to use its verification services. Nikhil Bhatia, professor of finance at the University of Southern California and a cryptocurrency specialist, told Business Insider that "it is difficult to judge something that is a crypto with a market capitalization of $2 billion as anything more than experimental or a fad. Worldcoin is not a contender in any way as a currency or asset against the dollar or Bitcoin."

And now what? The company has announced its intention to reach 100 million registrations over the next year, according to sources cited by the New York Post. But the road is full of questions. If World continues to require people to physically show up at its locations to have their eyes scanned, scaling could prove complex. And if regulatory problems persist in the world's most populated markets, things will be even more difficult for the company.
World faces something common in many technology projects: despite a powerful futuristic vision and plenty of capital, it does not seem to have a product that solves an immediate problem for the majority of users, nor a clearly profitable business model. For now, there are many people left to convince. In Xataka | The question is not whether AI will succeed in creating works of art. The question is whether we will consider them as such

Sam Altman does not take kindly to being asked about OpenAI's astronomical losses

OpenAI has a serious liquidity problem. It earns a lot, but that revenue is crumbs compared to what it needs to bring in. The numbers don't add up, but that hasn't stopped the company from signing multimillion-dollar agreements. Brad Gerstner, an OpenAI investor and podcaster, asked Sam Altman about this problem, and it seems Altman wasn't amused.

Defensive. As Futurism recounts: "How can a company with $13 billion in revenue commit to spending $1.4 trillion? You've heard the criticism, Sam," Gerstner asked on his podcast, with Satya Nadella also present and listening intently to the exchange. Altman's response was to get defensive: "If you want to sell your shares, I will find a buyer for you. Enough is enough." The interviewer laughed it off, and Altman continued in a soft but clearly sarcastic tone: "There are many people who speak with great concern about our products and who would be happy to buy shares." Click on the image to see the post on X.

Figures. OpenAI recently achieved a $500 billion valuation, becoming the most valuable private company in the world. It is not only the most valuable: it has signed agreements with some of the most important tech companies, such as NVIDIA, AMD, Broadcom and, just yesterday, Amazon. The tech industry has tied its destiny to that of OpenAI. If it fails, the consequences could be catastrophic.

Losses. Brad Gerstner is not at all wrong to ask Altman about the mismatch between his company's expenses and revenues. A few days ago, Microsoft presented its results and, given that it owns 27% of OpenAI, The Register calculated how much money Altman's company had lost in the last quarter. The figure is dizzying: $11.5 billion in just 90 days. It's something to be worried about.

For profit. After months of rumors about an impending divorce, Microsoft and OpenAI finally signed a kind of separation of assets.
In parallel, OpenAI finally achieved its long-desired goal: becoming a for-profit company. This move gives it more flexibility to collaborate with third parties and raise new investment rounds.

More fuel. Despite the more than justified doubts about astronomical AI spending, the big technology companies announced a few days ago that they were going to spend even more than they had planned. Investors are worried; just ask Zuckerberg, who, despite record income, watched Meta's shares fall 8%.

A question of faith. Sam Altman shares the same optimism and, responding to Gerstner, states that "revenues are growing rapidly (…) we are making an open bet that they will continue to grow." Curiously, he doesn't give any figures to back it up. Image | TechCrunch, Flickr (License CC BY 2.0) In Xataka | The world of AI has a problem: there is no energy for so many chips

OpenAI has become the "fast food" of AI. And that means that for Sam Altman the business is attention, not AGI

It was a foregone conclusion that OpenAI was going to launch its browser, so yesterday's launch of the Atlas browser didn't take us much by surprise. What matters is that the company does not stop releasing products and services. The pace is the most extraordinary we have seen in recent years, and the obvious question is: what is OpenAI pursuing with this strategy?

OpenAI is the world's great churro machine of AI products. In recent weeks we have seen OpenAI launch one new AI service or product after another, flooding the market. Some examples: And that's not counting the recently announced agreements with NVIDIA, AMD and Broadcom, which make it clear that the pace of OpenAI announcements is absolutely dizzying: too many new things, too often. Why?

The hype race as a business priority. That extraordinary flurry of releases suggests that OpenAI's big corporate priority is not so much the vaunted pursuit of AGI as dominating the conversation and, above all, the attention economy. What OpenAI wants is for us to be constantly talking about it, and the truth is that these launches are not exactly small: they all bring notable changes to its ecosystem and to the technology industry itself.

Smokescreen. Such frenzy also acts as a strategic smokescreen. With this bombardment of releases (browser, applications, SDKs, improved models), Altman and his team not only generate more hype but saturate the competitive space. Rivals barely have time to assimilate or replicate a feature before the next one has already been announced.

Towards an operating system. The launch of Atlas is an especially significant move. With it, it seems clear that OpenAI no longer wants to be a simple layer, the engine of AI, but a complete operating environment in the style of WeChat or the App Store. In fact, it wants to be the Windows of AI; either that turns out well, or it is going to be the mother of all bubbles.
Expectations attract new users (and investors). These constant moves also generate new expectations, even if only temporarily. OpenAI has managed to partly conquer the attention economy with launches such as the Studio Ghibli-style images or, more recently, Sora. This has allowed it to attract millions more free users, whom the company then tries to convert into paying users. Not only that: its growth also makes investors want to participate in the company's multimillion-dollar investment rounds.

And what about AGI? While all these launches take place, the holy grail of AI, achieving artificial general intelligence (AGI), seems to take a back seat. It is as if that discourse had become an empty mantra, or a long-term goal that is not credible amid this chaos. Altman has managed to replace the philosophical conversation (the one prompted by the hypothetical arrival of AGI) with a consumer conversation.

The fast food of AI. The AI ecosystem that OpenAI is creating has adopted a consumption pattern similar to the one we experience on social networks: fast and ephemeral, driven by the latest viral news. The Studio Ghibli-style visuals were exciting for a couple of weeks, and the same has happened with Sora 2, but that "wow" effect fades quickly. What does OpenAI do to revive the hype? Launch a new product. Atlas is the latest example.

Seeking a de facto monopoly. With all these moves, OpenAI continues to attract more and more users, dominate the conversation and gain attention. That may not get it what it really needs right now (revenue), but it solidifies its position as the absolute benchmark and helps it toward what it is really looking for: a de facto monopoly on AI. Image | Mariia Shalabaieva In Xataka | ChatGPT will let you have erotic conversations. Welcome to emotional intimacy with an AI
