We knew that US Big Tech had a problem with the cost of its AI. DeepSeek has just shown to what extent.

DeepSeek is the new darling of AI. This family of models, developed by a Chinese R&D lab of the same name, has achieved what seemed impossible: competing with models from OpenAI or Meta and doing so, according to the company, at a much lower cost. Is that true?

A development 18 times cheaper than GPT-4. The Chinese startup released DeepSeek V3, a 671B-parameter model, at the end of December 2024. This gigantic model was trained in just two months on a budget of $5.58 million, according to SCMP and analysts cited by the Financial Times. Its performance is comparable to OpenAI’s GPT-4, which cost about $100 million to develop according to Sam Altman. That is almost 18 times more, if we go by both the figure revealed by SCMP and Altman’s estimate.
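A minimal back-of-the-envelope check of that "almost 18 times" figure, using only the two numbers quoted above (SCMP's reported budget and Altman's rough estimate):

```python
# Training-cost ratio from the figures cited in the article.
# Both numbers are approximations reported by third parties, not audited costs.
deepseek_v3_cost = 5.58e6   # USD, DeepSeek V3 training budget per SCMP
gpt4_cost = 100e6           # USD, Sam Altman's "about $100 million" for GPT-4

ratio = gpt4_cost / deepseek_v3_cost
print(f"GPT-4 cost / DeepSeek V3 cost ≈ {ratio:.1f}x")
# ≈ 17.9x, i.e. the "almost 18 times more" mentioned in the text
```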


Comparative cost of the main chat and reasoning models today. DeepSeek’s price is far lower than its competitors’. Data: DeepSeek, OpenAI, Anthropic, Meta.

Amazingly cheap. The cost of DeepSeek’s API is remarkably low compared to its competitors’. If we take the data published by DeepSeek, Meta, OpenAI, Google and Anthropic, it is clear that using DeepSeek through its API costs much less than what its rivals charge. We have included the cost of GPT-4o mini, which seems to be the only one comparable in price, but its performance is well below that of DeepSeek V3.



DeepSeek V3 is superior to most of its competitors, although Meta released Llama 3.3 just days ago, for example, and this comparison changes frequently.

And it is (theoretically) superior to all of them. As pointed out on Reddit, DeepSeek V3’s current prices are promotional: starting February 8 they will rise to $0.27 per million input tokens (almost double) and $1.10 per million output tokens (almost four times more). This makes the comparison somewhat more favorable for the competition, especially for Llama, the only one that can compete on cost, although the Chinese model is superior to Meta’s (and to almost all the rest in many metrics) according to the benchmarks published by DeepSeek.
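To make the API pricing concrete, here is a minimal sketch of how a per-request cost comes out of those per-million-token prices. The standard prices are the ones quoted above (from February 8); the promotional figures ($0.14 and $0.28) are estimates derived from the "almost double" / "almost four times more" wording, and the request size is hypothetical:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 price_in: float, price_out: float) -> float:
    """Cost in USD, with prices expressed per million tokens."""
    return (input_tokens * price_in + output_tokens * price_out) / 1e6

# Hypothetical request: 2,000 input tokens, 500 output tokens
promo = request_cost(2_000, 500, price_in=0.14, price_out=0.28)    # estimated promo prices
standard = request_cost(2_000, 500, price_in=0.27, price_out=1.10) # prices from Feb 8

print(f"promotional: ${promo:.6f}  |  from Feb 8: ${standard:.6f}")
# promotional: $0.000420  |  from Feb 8: $0.001090
```

Even at the post-promotional rate, the cost per request stays well under a cent for a typical chat-sized exchange.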

DeepSeek also “thinks” cheaper. The cost comparison favors DeepSeek not only in traditional chatbots but also in reasoning models. According to its internal benchmarks, the spectacular DeepSeek R1 is significantly superior to OpenAI’s o1, yet using the o1 API costs 27 times more than using DeepSeek R1’s. Mind-blowing.
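Where does that "27 times" come from? A quick sketch, assuming the list prices both providers published at launch ($15/$60 per million input/output tokens for o1, $0.55/$2.19 for DeepSeek R1); these figures come from the providers' pricing pages, not from the article:

```python
# Per-million-token list prices at launch (USD), assumed from public pricing pages
o1 = {"input": 15.00, "output": 60.00}
r1 = {"input": 0.55, "output": 2.19}

for kind in ("input", "output"):
    print(f"{kind}: o1 is {o1[kind] / r1[kind]:.1f}x more expensive than R1")
# input: o1 is 27.3x more expensive than R1
# output: o1 is 27.4x more expensive than R1
```

Both ratios land around 27x, consistent with the comparison in the text.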

Price drop in sight. As expert Ethan Mollick points out, the market will adjust to these DeepSeek-driven price drops fairly quickly. By his estimates, the cost of GPT-4-level AI fell 1,000-fold in 18 months, and he expects a 95% drop in the price of reasoning models, which right now are clearly more expensive than the AI models behind ChatGPT, for example.
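For a sense of pace, a quick calculation of what a 1,000-fold reduction over 18 months implies as a steady month-over-month decline (using only the figures quoted from Mollick above):

```python
# Implied monthly price decline for a 1000x reduction over 18 months
total_reduction = 1000
months = 18

monthly_factor = total_reduction ** (-1 / months)  # fraction of the price retained each month
print(f"price retained each month: {monthly_factor:.2f}"
      f" (~{(1 - monthly_factor) * 100:.0f}% cheaper every month)")
# price retained each month: 0.68 (~32% cheaper every month)
```

In other words, prices dropping by roughly a third every single month.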

A Chinese tsunami. The launch of the DeepSeek models is quite a revolution for developers of all kinds of AI-based solutions: they now have access to much cheaper models that are comparable or superior to those of the competition. This puts its rivals in real trouble, and we will see how they react.

Good news for users. For us users, as well as for developers, this is great news, especially because these prices make access to these capabilities dramatically cheaper. The market had already been moving in this direction, but DeepSeek has made the drop in cost suddenly drastic.

Image | Xataka with Freepik Pikasso

In Xataka | OpenAI prepares a PhD-level AI. It is so promising that the company will first show it to the US Government
