ChatGPT costs $700,000 a day, DeepSeek just $87,000. One key is Huawei and playing its cards differently

2025 began with a tsunami in the artificial intelligence segment. After months and months of talk about models from companies such as Google, Microsoft, Apple, Meta and, of course, OpenAI, a Chinese company pulled DeepSeek out of its sleeve, an AI that shook the industry's foundations. Beyond its capabilities or how well it worked, what really stirred the waters were the economic and hardware questions.

Almost from the beginning, the question was how China had produced an AI like DeepSeek given the hardware limitations imposed by the trade war with the United States and the impossibility of buying NVIDIA's most powerful graphics cards (although not without controversy). The company's answer was that it had to rely on ingenuity: an infrastructure of NVIDIA H800 chips and a training run of more than 2.788 million GPU hours at a remarkably low cost of 5.6 million dollars. And that seems tiny, because OpenAI invested around 100 million dollars to train GPT-4.

Another open question is what it costs to keep such a system running. As Reuters notes, if ChatGPT costs about $700,000 a day, DeepSeek drops to $87,000. Here are some things worth keeping in mind.

DeepSeek is roughly eight times cheaper to maintain than ChatGPT, according to DeepSeek's own figures. Last Saturday, as reported by Reuters, DeepSeek revealed some data on the costs and income of its V3 and R1 models. The first is a traditional, more conversational chatbot, ideal for writing and content creation. R1, however, is a reasoning model: it excels at solving problems, uses logic and can show its reasoning step by step, relying on continuous learning. To compare with better-known models, DeepSeek V3 would be something like GPT-4 and R1 something similar to OpenAI o1.

In the report, Reuters highlights that DeepSeek's theoretical cost-profit ratio is up to 545% per day. Of course, the company itself warns that real income is significantly lower. But the other pearl DeepSeek has left us is the cost of maintenance.
Keeping ChatGPT running costs OpenAI about $700,000 a day (at least as of two years ago). The reason is the Microsoft Azure server infrastructure: it has a considerable energy cost, salaries have to be paid and, obviously, there is all the hardware power needed to process the queries it receives every second.

DeepSeek, by contrast, reports "only" $87,072, a ridiculously low figure in comparison. A few days ago the company stated that renting the H800 chips costs less than two dollars per hour and that its estimated theoretical daily income is just over $560,000, which would add up to more than 200 million dollars in a year. In the chart DeepSeek published (costs in yellow, theoretical income in blue), the company shows the cost of keeping R1 running and the theoretical income from the tokens it generates, whose price depends on the time of day, being cheaper at night. They also clarify that DeepSeek V3 is "significantly cheaper."

This opens up further questions. One is how it can be so cheap, because training an AI certainly is not. Setting aside OpenAI's accusation of theft, if DeepSeek has not deflated the numbers, it puts on the table a scenario in which so much graphics power is not needed to train an artificial intelligence. The key here is reinforcement learning, the way DeepSeek has found to do much more with much less. But it should also be noted that, although NVIDIA chips are used to train the R1 model, for inference it is using Huawei's Ascend 910B.

Huawei's chips are cheaper and, supposedly, more efficient, and this decision by DeepSeek is almost more relevant than what the system costs to maintain. The reason is that it can teach the rest of the AI companies that, perhaps, it is not worth using latest-generation GPUs for everything, but only for training, which happens only a handful of times before an AI is deployed, and then use other more efficient and cheaper GPUs for inference.
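A quick sanity check of the figures above, as illustrative arithmetic in Python. All numbers come from the article's reporting; the small gaps with the stated "10 times cheaper" and "545%" figures are just rounding in the original claims:

```python
# Daily figures (USD) as reported in the article
chatgpt_daily_cost = 700_000       # estimated cost for OpenAI to run ChatGPT
deepseek_daily_cost = 87_072       # DeepSeek's reported R1 inference cost
deepseek_daily_revenue = 560_000   # DeepSeek's theoretical daily income (approx.)

# How many times cheaper DeepSeek is to run (~8x on these figures)
cost_ratio = chatgpt_daily_cost / deepseek_daily_cost

# Theoretical daily cost-profit margin (close to the ~545% Reuters cites)
margin = (deepseek_daily_revenue - deepseek_daily_cost) / deepseek_daily_cost

# Annualized theoretical revenue: "more than 200 million dollars in a year"
yearly_revenue = deepseek_daily_revenue * 365

print(f"Cost ratio ChatGPT/DeepSeek: {cost_ratio:.1f}x")
print(f"Theoretical daily margin: {margin:.0%}")
print(f"Theoretical yearly revenue: ${yearly_revenue / 1e6:.0f}M")
```

Note that real margins would be far lower, as DeepSeek itself warns: the "theoretical income" assumes every token is billed at R1's rates, with no discounts or free usage.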
That inference is what happens later, in what we could call "real" use. Training would be the equivalent of absorbing the technical manuals of a five-year degree, while inference would be applying that knowledge and reasoning from the base you already have, without having to learn it all again.

In the end, the controversy over DeepSeek's five million dollars is going to be around for a while, especially when compared with OpenAI's numbers. But it is clear that DeepSeek is doing things from a different approach and can be a good mirror for the companies coming up behind it. And with China very focused on developing both AI and hardware for AI, it can be the perfect "spearhead" model.

Images | GitHub (DeepSeek), Xataka

In Xataka | DeepSeek has created another billion-dollar fortune: Liang Wenfeng has become popular but his wealth is still a mystery

Operator works differently (and better) than other agents that see our screen. Its secret: CUA

We finally have OpenAI's agent. It is called Operator, and it is a system capable of seeing our screen and autonomously performing actions in the browser based on our requests. It is something we had already seen with Anthropic's 'Computer Use' or DeepMind's Mariner, but here the company led by Sam Altman has its own special ingredient: the Computer-Using Agent (CUA).

Operator uses a model called Computer-Using Agent (CUA) that is based on GPT-4o. CUA interprets screenshots and interacts with websites through the typical browser controls, such as a cursor or a mouse.

How CUA works. As OpenAI explains in its documentation, the system processes the "raw pixels" of the screenshots it takes and uses a virtual mouse and keyboard to complete its actions. Once it has the screenshot, it "reasons" and follows a line of "thought" in which it takes its past actions into account.

Promising performance. There are several benchmarks that allow the capabilities of these agentic models to be evaluated. According to tests performed internally at OpenAI, CUA achieves 38.1% in OSWorld (general computer use), compared with 22% for Anthropic's system. Humans, for their part, achieve 72.4% on average, which makes it clear that these systems still have plenty of room for improvement. In browser use, the WebArena and WebVoyager benchmarks also let Operator score very high: 58.1% and 87% respectively, versus 36.2% and 56% for its competitors.

What about those screenshots Operator collects. Operator continuously takes screenshots to "see" the browser interface it interacts with. That browser does not run on our PC, but in a remote browser on OpenAI's servers. User data, including these captures, is used according to OpenAI's privacy policy. That is: it can be used to detect fraudulent activity and to improve the service.
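The screenshot-reason-act cycle OpenAI describes can be sketched as a simple loop. This is a hypothetical illustration of the control flow, not OpenAI's actual API: every name here (`Action`, `run_agent`, the stub classes) is invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    kind: str                           # e.g. "click", "type", "ask_user", "done"
    payload: dict = field(default_factory=dict)

def run_agent(task, browser, model, max_steps=50):
    """Perceive-reason-act loop: screenshot -> reason over history -> act."""
    history = []
    for _ in range(max_steps):
        pixels = browser.screenshot()                      # raw pixels, no DOM access
        action = model.next_action(task, pixels, history)  # "thought" over past actions
        history.append(action)
        if action.kind == "done":
            break
        if action.kind == "ask_user":                      # captcha, login, confirmation
            browser.hand_control_to_user()
        else:
            browser.perform(action)                        # virtual mouse and keyboard
    return history

# Minimal stubs to show the control flow; a real agent drives a remote browser
# and a vision-language model instead.
class StubBrowser:
    def screenshot(self): return b"raw-pixels"
    def perform(self, action): pass
    def hand_control_to_user(self): pass

class StubModel:
    def __init__(self, plan): self._plan = iter(plan)
    def next_action(self, task, pixels, history): return next(self._plan)

plan = [Action("click", {"x": 120, "y": 80}), Action("type", {"text": "hotel"}), Action("done")]
steps = run_agent("book a hotel", StubBrowser(), StubModel(plan))
```

The important design point the loop captures is that the model receives only pixels and its own action history, never the page's HTML, which is what lets CUA generalize across websites it has not been tuned for.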
That implies that our data can be used to train and improve the model, although we can deactivate that option in Operator's settings. The user also has control over how long this data is stored in Operator; by default it is kept until the user decides to delete it.

An agent that asks for help (and confirmation) when it needs it. As we have seen with other agents such as Anthropic's 'Computer Use', Operator is an agent that does not act recklessly. If it runs into an obstacle, like a captcha or a request to enter a username and password on a website, it will ask the user to take control, and it will also ask for final confirmation if, for example, we need to validate a reservation or the purchase of a product that Operator has found. The user can also take control at any time. This is how it works. Source: OpenAI.

Don't let go of the wheel. This is reminiscent of assisted driving systems such as Tesla's FSD. It is true that it can take us from one place to another once we enter the destination address, but it is important to keep paying attention and keep our hands on the wheel in case something unforeseen happens. Something similar happens with Operator and the rest of this type of agent.

There are things it cannot do. For now, Operator cannot complete specialized tasks such as managing complex calendar systems or interacting with highly personalized or non-standard websites. It will also refuse to do some tasks with a high risk of causing harm: for example, sending emails, making electronic transactions or deleting calendar events. Its features and capabilities will no doubt grow, but they will do so gradually, always trying to keep the chance of error as low as possible.

Image | OpenAI

In Xataka | Generative AI seems stagnant. Big tech believe they have an ace up their sleeve: "agents" that do things for us
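The refuse/confirm/proceed behavior described above amounts to a small policy gate in front of every action. The following is an illustrative sketch of that idea, not OpenAI's implementation; the category names and the function are assumptions made for the example:

```python
# Hypothetical action-gating policy, mirroring the behavior described in the
# article: refuse high-risk actions outright, ask the user to confirm
# consequential ones, and proceed with the rest.
HIGH_RISK = {"send_email", "electronic_transaction", "delete_calendar_event"}
NEEDS_CONFIRMATION = {"submit_purchase", "confirm_reservation"}

def gate(action_kind: str) -> str:
    """Decide how an Operator-style agent should handle a given action kind."""
    if action_kind in HIGH_RISK:
        return "refuse"        # tasks with a high risk of causing harm
    if action_kind in NEEDS_CONFIRMATION:
        return "ask_user"      # final confirmation before purchase or booking
    return "proceed"           # ordinary browsing actions run autonomously
```

The point of keeping the gate separate from the model is that safety behavior stays deterministic: no matter what the model "reasons," a payment never executes without a human in the loop.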
