There was a day when DeepSeek surprised half the world by demonstrating that you could go far with less. Today it returns with V3.1 and a message that does not go unnoticed: the model has been prepared for the next batch of Chinese chips. We are not talking about an automatic market upheaval, but about a concrete bet that points in an awkward direction for Nvidia and company. If that technical alignment with Chinese hardware translates into performance, the conversation about who powers AI in China is going to sound very different.
According to the company's own note, V3.1 introduces hybrid inference in the purest GPT-5 style: the same system with two modes, Think (deep reasoning) and Non-Think (quick responses), selectable from its website and app. The formulation is clear: "Hybrid inference: Think & Non-Think — one model, two modes." The company also underlines that the Think version "reaches answers in less time" than its predecessor. In other words, it is not only the weights that change: the inference modes already in service change too.
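As an illustration of what "one model, two modes" looks like from the outside, here is a minimal sketch assuming DeepSeek's OpenAI-compatible API, where the mode is chosen by model name (`deepseek-reasoner` for Think, `deepseek-chat` for Non-Think, per the public documentation at the time of writing; names may change). Nothing is sent over the network here; the sketch only builds the request payload.

```python
def select_model(thinking: bool) -> str:
    # Assumed identifiers: "deepseek-reasoner" routes to the Think mode,
    # "deepseek-chat" to Non-Think. Both are served by the same V3.1 weights.
    return "deepseek-reasoner" if thinking else "deepseek-chat"

def build_chat_payload(prompt: str, thinking: bool) -> dict:
    """Build an OpenAI-compatible chat-completions payload for
    api.deepseek.com (sketch only; no request is actually sent)."""
    return {
        "model": select_model(thinking),
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

# Same system, two routes: a quick answer vs. deep reasoning.
fast = build_chat_payload("Summarize FP8 in one line.", thinking=False)
deep = build_chat_payload("Prove there are infinitely many primes.", thinking=True)
```

The point of the design is that callers do not switch providers or endpoints to change behavior; they flip a single routing choice over one deployed model.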
The phrase that frames everything: an FP8 "designed for domestic chips"
In a comment pinned to its latest publication on WeChat, DeepSeek writes: "UE8M0 FP8 is for the next generation of domestic chips." That is the point that tightens the rope: it suggests the company has adjusted its data format, apparently an FP8 variant it labels UE8M0, to the next wave of Chinese processors. Bloomberg and Reuters picked up that message and summarized it: V3.1 is "customized to work with next-generation Chinese AI chips." In other words, an optimization oriented toward the local ecosystem.


The original comment in Chinese (left) and its Spanish translation with Google Translate (right)
FP8 is an 8-bit format that weighs half as much as FP16/BF16. With native support, it allows more throughput per cycle and lower memory use, provided the scaling is well calibrated. The official model card on Hugging Face states that DeepSeek-V3.1 "was trained using the UE8M0 FP8 scale" format, which indicates that this is not just a repackaging of weights: training and execution have been expressly adapted to that precision. The delicate part, and it is worth being prudent here, is that everything points to a batch of chips that will arrive in the future, since they would be able to exploit this scheme natively.
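To make the idea concrete, here is a minimal, hedged sketch in Python of how per-block FP8 quantization with a UE8M0 scale works: the scale is a pure power of two (unsigned, 8 exponent bits, 0 mantissa bits), so applying it in hardware is an exponent adjustment rather than a full multiply. The rounding logic and the 448 maximum assume the common E4M3 variant of FP8; DeepSeek's actual kernels are not public, so treat this strictly as an illustration of the scheme.

```python
import math

def ue8m0_scale(max_abs: float, fp8_max: float = 448.0) -> float:
    """Pick a power-of-two scale (UE8M0-style: unsigned, exponent only)
    so that max_abs / scale fits inside the FP8 E4M3 range."""
    if max_abs == 0.0:
        return 1.0
    # Smallest power of two >= max_abs / fp8_max.
    return 2.0 ** math.ceil(math.log2(max_abs / fp8_max))

def quantize_fp8_e4m3(x: float) -> float:
    """Crude FP8 E4M3 round-trip: keep 3 stored mantissa bits.
    (Sketch only: ignores subnormals, saturation and NaN handling.)"""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)        # x = m * 2**e, with 0.5 <= |m| < 1
    m = round(m * 16) / 16      # 1 implicit + 3 explicit mantissa bits
    return math.ldexp(m, e)

# A block of weights is stored as FP8 values plus ONE shared UE8M0 scale.
weights = [0.013, -1.7, 250.0, 3.2]
scale = ue8m0_scale(max(abs(w) for w in weights))
stored = [quantize_fp8_e4m3(w / scale) for w in weights]
restored = [v * scale for v in stored]
```

Because the shared scale carries the block's dynamic range, each 8-bit value only needs local precision, which is why the calibration of that scale matters so much for quality.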
So is this bad news for Nvidia? Data from the fiscal year ending January 26 indicates that China represented approximately 13% of the revenues of the company led by Jensen Huang. If part of China's AI computing migrates from the classic Nvidia GPU + CUDA ecosystem duo to domestic solutions that work with the UE8M0 FP8 format and deliver good results (presumably Huawei's Ascend chips), demand for Western solutions could erode over time.
China accounted for about 13% of Nvidia's revenue in the last fiscal year
All this plays out on the board of US export controls: restrictions that sought to block China's access to leading-edge chips and that have also accelerated its push for self-sufficiency. This year the Trump administration re-authorized, with conditions, the export of the H20, a chip cut down for China. Since then, the status of the H20 has been oscillating: between permits, Chinese regulatory pressure and Nvidia's plans to present Blackwell-based alternatives. The underlying message is that the framework is political and changing, and any route that allows China to depend less on these windows takes on strategic value.
It is worth remembering another fact that helps calibrate expectations. According to the Financial Times, DeepSeek tried to train its future R2 model on Huawei Ascend chips at the urging of official bodies and ran into persistent technical problems. It ended up returning to Nvidia for training, while continuing to work on Ascend compatibility for inference. That episode does not invalidate the current strategy, but it sets the bar: fully migrating its processes is not simple and requires, among other things, months of engineering. V3.1, therefore, must be read as an iteration. Now the company states that it has prepared its model for the next Chinese chips.


MathArena model scores
And here we have another interesting fact. MathArena, a platform linked to ETH Zurich that evaluates models on real, recent mathematics competitions, places GPT-5 as the leader, with 90% in final-answer tests, and DeepSeek-V3.1 (Think) somewhat behind, though among the best models of the moment. This helps locate the context: V3.1 competes at the top.
Images | Xataka with Gemini 2.5 | MathArena and DeepSeek screenshots