The hole that the DeepSeek hype has blown in Nvidia's valuation (half a trillion dollars and climbing) is somewhat deeper than a simple market correction: it is the end of an era in the AI industry.
Success can no longer be measured in invested dollars.
Why it matters. Until now, the dominant narrative in AI has been very simple: more money = better models. That equation has fueled stratospheric valuations and justified massive investments such as the Stargate project and its half a trillion dollars.
DeepSeek has just shown that this logic is becoming obsolete.
The contrast. OpenAI pours hundreds of millions of dollars into each iteration of GPT. Meta has dedicated billions to Llama, also open source (with nuances), without leading in performance.
DeepSeek has achieved equivalent or better results with 5.6 million dollars. Efficiency has triumphed over financial muscle. Even if that 5.6 million figure comes with extensive fine print and the real cost is higher, it does not cancel out the efficiency milestone.
Between the lines. The market reaction, with widespread drops across tech stocks beyond Nvidia, reinforces the paradigm shift. DeepSeek has not only built a good model: it has shown that the emperor has no clothes.
Huge investments in AI infrastructure, after all, could be based on erroneous assumptions about the relationship between spending and performance.
The money trail. DeepSeek's technical innovations, such as its 'mixture of experts' architecture and its reduced-precision training, are a signal: the future of AI does not lie in building bigger data centers, but in making them smarter and more efficient.
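To make the efficiency argument concrete, here is a minimal, hypothetical sketch (in Python with NumPy) of the routing idea behind a 'mixture of experts' layer: each token activates only a couple of 'experts', so most of the model's parameters sit idle on any given step and the compute per token is a fraction of what a dense model of the same size would need. The sizes, names and routing rule below are illustrative assumptions, not DeepSeek's actual implementation.

```python
# Illustrative mixture-of-experts routing sketch (toy sizes, not DeepSeek's real code).
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2                        # arbitrary toy dimensions

router = rng.normal(size=(d_model, n_experts))              # routing weights
experts = rng.normal(size=(n_experts, d_model, d_model))    # one weight matrix per expert

def moe_layer(x):
    """Route one token vector x through only its top_k experts."""
    scores = x @ router                                     # affinity with each expert
    top = np.argsort(scores)[-top_k:]                       # pick the k best experts
    gate = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over the chosen ones
    # Only top_k of the n_experts matrices are ever multiplied: that is the compute saving.
    return sum(g * (x @ experts[i]) for g, i in zip(gate, top))

token = rng.normal(size=d_model)
print(moe_layer(token).shape)   # (16,): same output size, ~top_k/n_experts of the work
```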
All of this leaves the tech giants on the other side of the Pacific in an uncomfortable position:
- How do you justify multimillion-dollar investments when a rival achieves similar results at a fraction of the cost?
- What happens to valuations built on the assumption that AI requires continuous massive investment?
- Are Nvidia's margins on AI chips sustainable if the trend is toward efficiency?
Yes, but. Not everything is efficiency. The big players will argue that their massive investments are justified by the need for scale and reliability.
Even here, DeepSeek raises uncomfortable questions: are 100,000 GPUs really necessary to train a good model... or have we been wasting resources for lack of innovation?
What's next. The market is going to reassess the entire AI value chain. If models can be trained at a fraction of the expected cost, what does that mean for...?
- Chip manufacturers such as Nvidia and AMD.
- Cloud infrastructure providers.
- Startups that have raised billions based on projections of massive investment.
Even for projections of energy consumption from AI training.
The next phase of the AI race may not be measured in teraflops or model sizes, but in innovations that improve efficiency. The race is no longer about who can spend the most, but about who can spend less while getting more.
The arrival of DeepSeek marks a milestone and the beginning of an era: one in which competitive advantage will come not from having the deepest pockets, but from having the smartest idea.
This tumble already amounts to half a trillion dollars for Wall Street. For now.
Featured image | Xataka with Mockuuuups Studio
In Xataka | DeepSeek is the model of the moment. The problem is that nobody knows very well what it is doing with our data
