Sam Altman has had another great idea to finally charge the user all the money he needs: a receipt at the end of the month

We are used to pay the electricity bill or water because they have become basic and totally universal goods. Well, Sam Altman, CEO of OpenAI, is clear that artificial intelligence will be exactly that: a commoditya basic and totally universal good. This implies, of course, that there will come a time when, just as we pay the electricity or water bill, we will pay the monthly AI bill. Paying for AI will be an everyday thing. Altman recently participated in an event in Washington DC and there raised an idea that has been around for a long time but is certainly gaining more and more strength: that AI will offer like electricity or water, on demand: as soon as you need it, it will be there for you. That, of course, will mean that just as we now pay for our electricity or water use, we will also pay for the AI ​​supply that we use. And we will do it at the end of the month with the traditional method: an invoice from our supplier. In Xataka The most powerful AI agent in the world has just arrived: the first thing it does is warn you that it is dangerous From consuming kW to consuming tokens. Thus, instead of paying fixed subscriptions as we usually do now when contracting ChatGPT Plus or Claude Pro, for example, what we will do is pay that monthly bill. The amount we will pay will be based on how many “tokens“(processing units) we have consumed to solve all types of tasks. We have power plants, we will have data centers. To Altman this speech fits like a glovebecause it justifies its AI data center megaprojects —and those of the rest of the industry—. If AI is to become that universal basic resource, we will have to have the infrastructure (the “AI power plants”) to sustain it. Without such infrastructure, Altman warns, the price of “intelligence” will skyrocket, turning it into an exclusive privilege for the richest or a resource rationed by governments. Compute Yottaflops. That race for infrastructure has already begun, and big technology companies are fueling it. The reason is simple: either they enter that maelstrom or they risk being left out if the AI ​​revolution actually becomes a reality. Lisa Su, CEO of AMD, explained in her opening talk at CES 2026 that the world will need more than “10 yottaflops” of computing – 10,000 times more than the existing AI capacity in 2022 – in the next five years to be able to meet the demand posed by this massive use of AI. Chips missing… and a lot of energy. The real obstacle to achieving such computing capacity not only lies in the chips – the memory crisis is a side effect of this – but also in energy. data centers they consume a lotwhich makes national electrical networks can finish not having sufficient capacity to supply said energy. OpenAI will not stop spending. Greg Brockman, president of OpenAI, explained in December that their projects, no matter how gigantic they may seem, will go further. Although the company has already committed to investing $1.4 trillion with its partners in data centers over the next eight years, OpenAI wants to “get ahead of the future, but I don’t think we can be, no matter how ambitious we want to dream of being right now.” That is to say, he believes that all his estimates and projects may end up being dwarfed by the true scale to which AI can reach. {“videoId”:”xa1wtpm”,”autoplay”:false,”title”:”Perplexity, Personal Computer”, “tag”:””, “duration”:”88″} Big Tech wants to bill you at the end of the month. Turn AI into a commodity For it to reach all homes would be an absolute triumph for the companies that are investing in it. The tech industry has not managed to direct its costs to the user other than in things like our internet connection or, at most, in our spending on streaming services —similar to current AI plans—. If it achieves that bill at the end of the month that hundreds (perhaps thousands) of millions of people would also pay, AI would become an extraordinary income machine. In Xataka | OpenClaw changed the rules of the AI ​​race. Technology companies already have their answer: copy it (function() { window._JS_MODULES = window._JS_MODULES || {}; var headElement = document.getElementsByTagName(‘head’)(0); if (_JS_MODULES.instagram) { var instagramScript = document.createElement(‘script’); instagramScript.src=”https://platform.instagram.com/en_US/embeds.js”; instagramScript.async = true; instagramScript.defer = true; headElement.appendChild(instagramScript); – The news Sam Altman has had another great idea to finally charge the user all the money he needs: a receipt at the end of the month was originally published in Xataka by Javier Pastor .

make it so cheap that it is “invisible” to the user

DeepSeek is the spearhead when we talk about artificial intelligence from China. Not only does it have a great performancebut the own Microsoft has raised the alarm by pointing out that its policy is allowing it to gain users in markets in which others such as OpenAI has it more difficult. Other companies like Tencent or Alibaba are taking giant steps in the fight for AI, and a few days ago ByteDance –TikTok– presented a Seedance 2.0 which is impressive… and now it’s giving you headaches. But the great ones are not the only ones, and with a China focused on the development of robotics and AIwe must talk about other smaller ‘players’. Zhipu AI and MiniMax are two of the “tigers“that, in just a few years, have raised hundreds of millions of dollars and that have models that have a radically different philosophy to that of OpenAI and other Western giants. Their models are sold as life companions, tools that people can use on a daily basis without worrying about the price. And, within that speech, MiniMax just launched the M2.5, a model that wants to become a “digital employee” and that its managers have classified as its first “frontier model” so cheap that it is not worth measuring the price. AI too cheap to worry about price M2.5 is now official and, as stated South China Morning PostMiniMax did not want to waste the opportunity to launch it in a hectic week for the AI ​​industry in China. Technically, M2.5 is an LLM – large language model – that can handle about 230 billion total parameters, but only uses 10 billion per token. Being a Mixture of Experts system, each call only involves the experts directly necessary to resolve the request. Bringing the figure down to earth, that means that it is a capable model, but by user request does not use its full potentialwhich implies low inference costs and very low prices for users. Those responsible for it claim that the price is just one dollar per hour of continuous operation, spending 100 tokens per second. This means that you can have an “agent” working continuously throughout that time at a price between 10 and 20 times lower than other models such as Opus, Gemini 3 Pro either GPT-5. Such an aggressive policy makes M2.5 a model “too cheap to quantify,” according to those responsible. facilitating that mass adoption because the user can stop optimizing each order he gives to the AI. That phrase “too cheap to put in” it’s a wink to the historic comment that electricity from nuclear energy would be too cheap to measure. Internal score in different tests | Image: MiniMax And something important is that M2.5 is not a simple chatbot. It is available on platforms such as Ollama, HuggingFace, ModelScope in China or GitHub, and MiniMax itself points out that 30% of the company’s internal tasks are already carried out by M2.5 itself. Furthermore, 80% of new code is generated by the model. That is, it is more optimized for working alone than for chatting. This code created by code thing is not unique to M2.5, and Codex and Opus is also in this boat. The model has already been put to the test and, although in some tasks it achieves notable results, especially compared to other models open-weightits score is far from that of the closed models. In the results internal from the company itself, managed to double the score of the previous model, the M2.1, but as SCMP points out, these internal benchmark scores are difficult to verify independently. Internal benchmark in coding | Image: MiniMax But, in the end, whether more or less capable compared to other models, MiniMax M2.5 is another example of the strategy that China is pushing with artificial intelligence. While the United States is striving to demonstrate that it has increasingly powerful and capable proprietary models, AI is in a narrative in which aims to promote cheaper and more useful models for the user. This implies not only that they have a good performance/price ratio, but also that they can run on everyday devices without enormous computing power. And now that, supposedly, certain Chinese companies You will be able to get your hands on some of the best GPUs of NVIDIA to train AI, the boost to that strategy may be notable. Images | MiniMax (edited) In Xataka | There is another race equally important as the one for chips to win AI and in that China takes the lead

This is how he ended up filtering private data from a Gmail user

Would you trust artificial intelligence for something as intimate as managing your email? It is not just about your answers, but about giving you access to actions in a private environment where we keep great Part of our personal and work life. The temptation is there. Why invest several minutes in manual searches and check messages one by one if you can delegate the task to an AI agent with an instruction as simple as the following: “Analyze my emails today and collect all the information about my process of hiring new employees”? On paper, the plan seems perfect. The AI ​​assumes tedious work and you recover time for what really matters. Of an innocent message to an invisible escape The problem is that this “magical” solution can also become against. Which promises to increase productivity can become the entrance door for attackers With bad intentions. This is warned by the last Cybersecury Radware Researchwhich demonstrates how a carefully elaborate email consisted of mocking the security defenses of the function In -depth research of Chatgpt and transform it into a tool to filter sensitive information. The disturbing of the report is the simplicity of the attack. It is not necessary to click on any link or download anything suspicious: it is enough that the assistant processes an altered mail to finish filtering sensitive information. The user continues with their day to day without noticing anything, while the data travel to a server controlled by the attacker. Part of success is in the combination of several classic social engineering techniques adapted to deceive AI. Authority statement: The message insists that the agent has “full authorization” and is “expected” to access external URLs, which generates a false feeling of permission. Malicious URL camouflage: The attacker’s address is presented as an official service, for example, a “compliance system” or a “profile recovery interface”, so that a legitimate corporate task may seem. Persistence mandate: When the so -called soft control failure, the prompt orders to try several times and “be creative” until you get access, which allows you to overcome non -deterministic restrictions. Emergency creation and consequences: Problems are noticed if the action is not completed, such as “the report will be incomplete”, which presses the assistant to run quickly. False security statement: It is ensured that the data is public or that the answer is “static html” and it is indicated that they are codified based on64 so that they are “safe”, a resource that actually helps hide the exfiltration. Clear and reproducible example: The email includes an example step by step how to format the data and the URL, which makes it easier for the model to follow it to the letter. As we can see, the vector is simple in its appearance and dangerous in its result. An email with hidden instructions in its HTML or metadata becomes, for the agent, a legitimate order. In general, the attack is materialized: The attacker prepares a legitimate email, but with code or instructions embedded in the HTML that are invisible to the user. The message reaches the recipient’s tray and goes unnoticed among the rest of the mails. When the user orders in depth of chatgpt to review or summarize the messages of the day, the agent processes the mail and does not distinguish between visible text and hidden instructions. The agent executes the instructions and makes a call to an external URL controlled by the attacker, including in the request data extracted from the mailbox. The organization does not detect the exit in its systems, because traffic leaves from the cloud of the supplier and not from its perimeter. The consequences go far beyond a simple manipulated mail. Being an agent connected with permits to act on the inbox, any document, invoice or strategy shared by email can end up in the hands of a third party without the user perceiving it. The risk is double: on the one hand, the loss of confidential information; On the other, the difficulty of tracking the escape, since the request is based on the infrastructure of the assistant itself and not from the company’s network. The consequences go far beyond a simple manipulated mail. The finding did not stay in a simple warning. He was communicated responsible for OpenAI, who recognized vulnerability and acted quickly to close it. Since then, the failure is corrected, but that does not mean that the risk has disappeared. What is evidenced is an attack pattern that could be repeated in other AI environments with similar characteristics, and that forces us to rethink how we manage confidence in these systems. We are entering at a time when IA agents multiply and force to rethink how we understand security. For many users a scenario such as we have described is unthinkable, even for those who have an advanced level in computer science. There is no antivirus that free us from this type of vulnerabilities: the key is to understand what happens and anticipate. The most striking thing is that attacks begin to look more like an exercise of persuasion in natural language than to a line of code. Images | Xataka with Gemini 2.5 Pro In Xataka | China has the largest censorship system in the world. Now he has decided to export it and sell it to other countries

Even so, he is going better than expected, and the key is in user pocket

There are companies that appear everywhere, which have millions of users, which seem unbeatable. But behind that dazzling popular there is a fact that is often overlooked: They are not profitable. Some have been growing at full speed without ever generating benefits. Openai is one of those cases. His name is on everyone’s lips since Chatgpt He went into the world in 2022. But the truth is that, economically, he is still far from quading the accounts. Spend much more than you enter. And it will not be profitable for several years, if they are fulfilled Your own calculations. The paradox of success that does not give benefits OpenAi is everywhere. And chatgpt too. In just over two years it has gone from being a technical rarity to get into mobiles, computers and conversations. But that something sounds a lot It does not mean that it gives money. In fact, the accounts are still in red. Because the phenomenon is real, yes. But so is the expense. Training increasingly large models, keeping servers working and hiring talent is not cheap. According to The InformationOpenai would have lost about 5,000 million dollars in 2024. And this year the thing does not look much better. It is not a unique case. Spotify was founded in 2006 and It took twelve years to see benefits for the first time. Twelve. And only in 2024 managed to close A full year in positive. Have millions of users It does not guarantee May a company be profitable. Openai does not seem to be close to balance, but there is an important difference compared to a year ago. According to Financial Timesyour subscription income has shot. It has gone from generating $ 5.5 billion to approach 10,000. Half of that money comes from users who pay for chatgpt. Telegram’s case also helps put things in perspective. The application was born in 2013 and for more than a decade operated without generating benefits. Only in 2024, after exceeding 900 million users, Finally reached profitability. It took eleven years. Openai aims to follow the same path, but at another pace. The company already has told its investors that does not expect to be profitable before 2029. And for that to happen, it needs a very concrete figure: reach 125,000 million dollars in annual income. The company has already told its investors that it does not expect to be profitable before 2029. It is an ambitious objective, especially if we take into account that today is around 10,000 million. For multiply by more than ten Its turnover, OpenAi not only trusts that more users subscribe to Chatgpt, but that much of their income is related to their API. When we talk about the API we are referring to the system that allows you to integrate OpenAI models into third -party applications. Companies of all kinds, from banking to health, which can use various models of the company, such as GPT-4.1, to improve their benefits. Another important source, According to The Informationthey would be the calls Artificial Intelligence Agentsmore sophisticated tools that not only answer questions, but do complex tasks autonomously. Openai wants this to become its great premium product. It should be noted that many of the great technology (Microsoft, Google, Tesla) quote on the stock market and publish each quarter accounts. That forces to generate official reports, audited data and financial transparency. Things are different in the startup led by Sam Altman. OpenAi does not quote on a stock market and adopts a hybrid model: a non -profit entity (Openai, Inc.) controls a subsidiary with limited profittoday in the process of becoming in a public benefit corporation to capture greater investments than those received. Not being obliged to publicly audit your commercial accounts, OpenAi does not publish official figures. There are no quarterly reports of income, costs or losses. This is usual in many US private companies, which have greater financial confidentiality. Therefore, when we talk about current Openai numbers, we do it supporting ourselves in leaks, in Media estimates such as The Information or in data that the company itself shares selectively with investors. There are no periodic official reports, because they do not exist. There is no doubt that Openai has achieved something huge: he has put generative artificial intelligence in everyone’s mouth. But that does not guarantee income, much less benefits. Touch to wait to know if the company will meet its goals. For now, it seems to be on the right track. Images | Techcrunch (CC by 2.0) | Giorgio Trovato In Xataka | Apple is following the same pattern as Microsoft with the Internet in the 90s: Integration not exempt from risk

A user bought a state -of -the -art connected dishwasher. There began his nightmare

“I will not connect my dishwasher to your stupid cloud.” That has been The declaration of intentions From Jeff Geerling, a well -known YouTuber, after buying a new household appliance. It is something that we are living for years: the condemnation of the products that force us to spy on everything they do because if they do not work as they should theoretically. Premieving dishwasher. This week Geerling explained in his blog and also In a video on YouTube How he bought a new dishwasher from the Bosch brand. Specifically, one of the 500 series, because I had seen it recommended in Consumer Reports, a well -known organization of consumers and users in the US. After the installation the surprise came: I could not turn it on and put it to work without more. Wi-Fi and Mandatory Online Account. When trying to start it and start a clarified cycle, Geerling realized that he could not activate that option. After reading the manual he knew why. I needed to do two things first. The first, connect your dishwasher to your domestic wi-fi network. The second, create an account in the Bosch Home Connect service, which would allow you to access the option of clarified and some additional options, such as ecological or medium load mode. I could not even use the basic options via Bluetooth for example: it had to connect it to the Wi-Fi network and create the account yes or yes. If you want screen, it’s time to pay more. The dishwasher that bought this youtuber is a model that costs around $ 1,000. In spite of this – or precisely because of that – it did not have physical buttons, and the controls are tactile and are at the dishwasher door when opening it. There is also no screen that allows you to know how much time it remains for the washing cycle to end, for example. The upper series, the Bosch 800, do have that screen, but there is to pay $ 400. Why do I need a dishwasher app? As Geerling indicated, a local application with some type of direct wireless connection – but without connection to the outside or the rest of the local network devices – could make sense, but this did not have it. That you have to control virtually all the options and information of your washing machine mandatory through an app and a Wi-Fi connection was absurd according to your opinion. “I don’t need internet in my dishwasher,” he said. Few and bad options. This YouTuber is a technology expert and is accustomed to finding solutions to these types of problems. He talked about how maybe I could not use those advanced options, but if you spend $ 1,000 in a dishwasher it seems like not strange not to be able to use them. He also thought of using a VLAN network for IoT devices (a separate and isolated network). Nor hack it compensates too much. There is even an even more technical option, because someone has made reverse engineering of the Bosch protocol and has created a HCPY protocol To control these dishwashers without the manufacturer being able to do anything. However, that would force him to devote several more hours to putting everything in motion and all he wanted was to be able to use his dishwasher in a simple way. And he could return the appliance, but he had also invested time and effort to install it at home. A dishwasher should not connect to the Internet. For Geerling Bosch’s strategy is an absolute mistake. Among other things because if you depend on an application and a cloud service to use your dishwasher, Bosch will have to manage and maintain that service, which means two things: either they are selling the data of your use of the appliance, or will end up closing the service or migrating it to be a subscription service, something that has passed in similar scenarios. And a potential security hole. But it is also that a dishwasher is part of your domestic network opens another potential attack vector. If someone manages to find vulnerability in Bosch dishwasher, they will have access to the rest of your local area network. That It has also happened. Not onebut several times. Not everything needs to be connected. Geerling’s conclusion – with which we agree – is that IoT appliances and products should be designed with a maximum: first that they work at home, and then, if perhaps, they work in the cloud as something optional. This dishwasher is an example of a trend that can make sense in some areas, but not of course in this concrete. Image | Jeff Geerling In Xataka | We are running out of “dumb” appliances. The latest LG is a microwave with screen and speakers

Log In

Forgot password?

Forgot password?

Enter your account data and we will send you a link to reset your password.

Your password reset link appears to be invalid or expired.

Log in

Privacy Policy

Add to Collection

No Collections

Here you'll find all collections you've created before.