DeepSeek V4 is here. It’s good news for efficiency and bad news for the myth

DeepSeek has published its V4 model under an MIT license, with notable improvements in code and an architecture designed with Chinese chips in mind. It has also admitted, in its own technical report, that it is three to six months behind the leading Western models. For the laboratory that changed the global AI narrative a little over a year ago, that is much more than a nuance.

Why it matters. DeepSeek became a symbol in January 2025. Its "moment" shook the markets, questioned the logic of the American tech stock market and convinced half the world that China could compete head-to-head at the frontier of AI at a fraction of the cost. V4 does not destroy that story, but it does complicate it: China's most important AI laboratory arrives with a model that its own engineers describe as a step, not a leap.

The context. V4 took longer than expected to arrive. According to industry sources gathered by 36Kr, DeepSeek suffered a serious training failure in mid-2025 while trying to migrate its infrastructure from NVIDIA to Huawei's Ascend chips. Internal opinions on the technical direction were not aligned, and the founder, Liang Wenfeng, imposed conditions that were difficult to execute. The result: months of delay and a model that, furthermore, is still not multimodal, postponed for lack of computing capacity and cash.

Between the lines. The most interesting part of V4 is its architecture. The model introduces TileLang, a domain-specific language that decouples low-level code from CUDA (the NVIDIA standard) so it can be compiled for different chips. It also incorporates MegaMoE, a kernel designed to reduce latency in expert parallelism that already runs on Ascend hardware. But V4 training has continued on NVIDIA GPUs. Independence is, for now, more an aspiration than an accomplished fact.

The turning point.
While DeepSeek looked inward, the Chinese market has been reorganizing itself without it: Doubao, from ByteDance, has become the most downloaded chatbot in China. MiniMax and Z.ai have gone public. Alibaba has achieved wide adoption thanks to vertical applications. DeepSeek never wanted to build a consumer product, and the market has not waited for it. The internal bill has also come due: the laboratory has lost key talent to Tencent, ByteDance and Xiaomi in practically every area. Liang Wenfeng refused to give up 20% to an unidentified large investor. And now, for the first time, DeepSeek is opening an external funding round.

The main loser? The narrative of Chinese open source as a real alternative to the closed Western model has taken a hit. A Qwen employee told 36Kr that "the golden age of nonprofit AI development is over."

The big question. Whether DeepSeek can regain lost ground. That depends largely on Huawei, whose Ascend 950 promises to scale well with V4, but 750,000 units are equivalent, adjusted for quality, to a week of American production. The gap is not closed with ingenious architectures. It is closed with silicon.

In Xataka | Companies around the world face an irresolvable dilemma: either they are with China or with the US; with both it is no longer possible

Featured image | Solen Feyissa

DeepSeek has just released a model that competes with Opus 4.6. It costs seven times less and runs on Chinese chips

484 days have passed since that "DeepSeek moment", but the wait seems to have been worth it, because the new DeepSeek V4 is here. This is an absolutely gigantic open-weights model that once again promises to shake the foundations of the proprietary foundational models from Anthropic, OpenAI or Google. Things are moving, folks.

Gigantic and open. DeepSeek V4 is an open-weights model and comes in two versions. The first is Pro, with 1.6 trillion parameters (1.6T), of which 49 billion are active. The second is Flash, with 248 billion parameters (248B, huge for a "Flash" model), of which 13 billion are active.

More efficient than ever. Both versions use a Mixture-of-Experts (MoE) architecture, meaning that only a fraction of the parameters is activated on each inference, which significantly reduces the computational cost. Both versions support a context window of one million tokens (enough to fit several novels at once as input), where V3 offered 128,000. The model is also far more efficient than its predecessor in compute per token: it requires only 27% of the operations per token and 10% of the KV cache compared to DeepSeek V3.2.

The benchmarks look promising. DeepSeek's internal testing shows that V4 Pro-Max (its best model, with the highest reasoning ability) outperforms or is on par with Claude Opus 4.6 Max, GPT-5.4 xHigh, Gemini 3.1 Pro High, Kimi K2.6 and GLM 5.1. The results are not independently verified, so we should treat them with caution. Even so, the numbers are striking: in LiveCodeBench, a programming test, DeepSeek V4-Pro-Max scores 93.5% against 88.8% for Opus 4.6 and 91.7% for Gemini 3.1 Pro. In other tests there is more variability, but at least on paper DeepSeek V4 Pro looks as good as Opus 4.7, which until now was the absolute benchmark.

Much cheaper.
But as with its previous version, the price difference with those American models is astonishing. As analyst Simon Willison points out, the official prices of DeepSeek V4 Pro are $1.74 per million input tokens and $3.48 per million output tokens, almost seven times less than Opus 4.7 and almost nine times less than the new GPT-5.5. With DeepSeek V4 Flash the cost is $0.14/$0.28 per million input/output tokens, while GPT-5.4 Mini costs up to 16 times more. The conclusion is obvious: if it really does what it claims, the price is an absolute bargain. That is precisely the challenge: real-world experience has to confirm what the benchmarks say.

The hardware mystery. DeepSeek has not revealed what hardware was used to train this version of its foundational model. In the past it did admit to using NVIDIA H800s. What is known is that the model has been developed to run on both NVIDIA and Huawei Ascend chips. Baidu has confirmed the latter: its Ascend Supernode clusters based on the Ascend 950 will fully support the DeepSeek V4 versions.

Huawei support is "horrible" news for the US. The Information had already reported that one reason for the "delay" in this model's appearance was adapting it to work smoothly with Huawei chips. That support is, according to Jensen Huang, "horrible" news for the US, because it means that dependence on NVIDIA chips no longer exists, or is at least reduced to a minimum.

But. The launch comes at a difficult time for the company. Guo Daya, one of the people responsible for the V1 and V3 models, has signed with ByteDance to work on AI agents. Luo Fuli, who led the development of V2, joined Xiaomi last year. The launch also coincides with DeepSeek seeking external funding for the first time: it is expected to raise about $300 million at a valuation of about $20 billion, according to The Wall Street Journal.

From the surprise effect to the continuity effect. The launch of DeepSeek R1 in January 2025 was surprising because it demonstrated that China could train competitive models at a fraction of the cost of Western ones. With DeepSeek V4 that surprise effect gives way to a continuity effect: the model seems to maintain precisely what made its predecessor famous, extraordinary power at a very low cost.

Bad news for Anthropic. Such low prices are terrible news for Anthropic, which in recent weeks has been forced to carry out a kind of "shrinkflation" of its new models, which are not more expensive but consume many more tokens. We will have to see whether DeepSeek V4 Pro is as good as the company promises, but if it is, we will have another "DeepSeek moment" before us. Maybe not as notable as last year's, but just as relevant.

In Xataka | Everything looked rosy for DeepSeek as the great Chinese AI. It didn't count on one small detail: Kimi
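The Mixture-of-Experts design behind both V4 variants, where only a small fraction of the parameters fires per token, can be sketched in a few lines of Python. Everything below is a generic illustration, not DeepSeek's router: the 256-expert pool, the top-8 budget and the per-expert size are invented, chosen only to echo the roughly 3% active-parameter ratio the reported 1.6T/49B figures imply.

```python
import random

# Illustrative MoE routing: of E experts, only the top-k scored ones
# run per token, so active parameters << total parameters.
TOTAL_EXPERTS = 256                 # hypothetical expert count
ACTIVE_PER_TOKEN = 8                # hypothetical top-k budget
PARAMS_PER_EXPERT = 6_000_000_000   # hypothetical per-expert size

def route(token_scores, k=ACTIVE_PER_TOKEN):
    """Pick the k highest-scoring experts for one token."""
    ranked = sorted(range(len(token_scores)), key=lambda i: -token_scores[i])
    return ranked[:k]

random.seed(0)
scores = [random.random() for _ in range((TOTAL_EXPERTS))]
active = route(scores)

total_params = TOTAL_EXPERTS * PARAMS_PER_EXPERT
active_params = ACTIVE_PER_TOKEN * PARAMS_PER_EXPERT
print(f"{len(active)} of {TOTAL_EXPERTS} experts run per token")
print(f"active fraction: {active_params / total_params:.1%}")  # 3.1%
```

The cost saving is exactly that active fraction: a token pays for 8 experts' worth of compute while the model keeps 256 experts' worth of knowledge on disk.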

Everything looked rosy for DeepSeek as the great Chinese AI. It didn't count on one small detail: Kimi

Just a year ago, DeepSeek gave Silicon Valley one of the biggest scares it had ever received: a Chinese model, trained on a fraction of OpenAI's budget, that matched GPT-4 in benchmarks. On its arrival the message seemed clear: Western dominance of AI had its days numbered. Today, the story stands, but not thanks to DeepSeek.

The DeepSeek case. DeepSeek is months late with its V4 and, to date, has already lost three of the authors of R1, the model that catapulted it to success. Monthly downloads fell 72% in the second quarter of the year as Doubao (ByteDance) snatched the lead. Between missed dates, outages caused by cyberattacks, and the difficulty of splitting from NVIDIA to bet almost entirely on Huawei's Ascend chips, Chinese alternatives like Kimi have been gaining ground.

Meanwhile, on the other side of China. Moonshot AI was not born surrounded by noise like DeepSeek. It was founded in March 2023 by three former colleagues from Tsinghua University: Yang Zhilin (PhD from Carnegie Mellon, formerly of Google Brain and Meta AI), along with Zhou Xinyu and Wu Yuxin. There were no visible media faces behind it, only product. That product is Kimi, and in early January 2026 the company launched its K2.5 version. In code and video benchmarks it managed to surpass GPT-5 and Gemini 3 Pro, with the key ingredient of Chinese AI: its API costs between 4 and 17 times less than OpenAI's. Moonshot's leaders explained that Kimi was almost at Claude's level in software development testing, spurring on the race for open models.

The money arrived. The commercial results are what really draw attention. In less than 20 days following the launch of K2.5, Kimi's cumulative revenue exceeded everything billed during 2025. International API revenue has quadrupled since November of the previous year. The consequence for its valuation has been dizzying: $4.3 billion in December 2025, $10 billion in February 2026, $18 billion in March.
Three months, valuation multiplied by four. Kimi has thus become the fastest decacorn in Chinese business history.

The Chinese maelstrom. DeepSeek was born a year ago as the great revolution that questioned Silicon Valley's closed model. It took only a few months for Moonshot to steal the limelight and manage to sit on par with, or even above, giants like Google and OpenAI among the most used models in the world. In DeepSeek's favor, it should be noted that its objective is different: it does not follow the typical startup pattern, with its pressure for immediate monetization, and it is a gigantic AI laboratory that can afford not to win in the short term.

In Xataka | DeepSeek API: what it is, what it is for, prices and how you can get one to use in your projects

Anthropic just accused DeepSeek and other Chinese companies of “distilling” Claude

For months we have talked about the race between the United States and China to dominate artificial intelligence as if it were only a question of who trains the most powerful model or launches the next version first. But the contest is starting to move to another, more delicate arena: the rules of the game. When one laboratory accuses another of extracting capabilities from its system to accelerate its own development, the discussion goes beyond the technical. That is exactly what Anthropic has just done by denouncing "distillation" campaigns against its model Claude.

The complaint. In a text published this Monday, the company claims to have detected "industrial-scale campaigns" aimed at extracting Claude's capabilities. According to its version, the activities attributed to DeepSeek, Moonshot and MiniMax reportedly involved more than 16 million queries (question-and-answer interactions) channeled through approximately 24,000 fraudulent accounts, in violation of its terms of service and regional access restrictions.

The race and the suspicion. The announcement by the firm led by Dario Amodei comes amid growing tension around the progress of Chinese AI. Remember that DeepSeek upended the Silicon Valley landscape a year ago with the launch of R1, a competitive model presented as developed at a fraction of the cost of American alternatives. The impact on the markets was immediate, and it revived the political debate in Washington about the technological advantage over China.

Distilling is not always cheating. Anthropic itself recognizes that distillation is a common technique in the sector. In simple terms, it consists of training a less capable model on the responses generated by a more powerful one, something large laboratories use to create smaller, cheaper versions of their own systems.
The problem, according to the company, appears when the practice is used to "acquire powerful capabilities from other laboratories in a fraction of the time and at a fraction of the cost" that developing them independently would require. In that case, distillation would cease to be an internal optimization and would become, always according to Anthropic, a way of free-riding on the work of others.

A recognizable pattern. The three laboratories allegedly used fraudulent accounts and proxy services to access Claude at scale while trying to evade detection systems. The company describes the infrastructure, what it calls "hydra clusters": extensive networks of accounts that distribute traffic between its API and third-party cloud platforms, so that when one account was blocked, another took its place. Anthropic maintains that what set these activities apart from normal use was not any isolated query, but the massive, coordinated repetition of requests aimed at extracting very specific capabilities from the model.

Three campaigns. Although Anthropic presents the campaigns as part of the same dynamic, it draws relevant distinctions. DeepSeek allegedly focused its more than 150,000 queries on extracting reasoning capabilities and generating safe alternatives to politically sensitive questions. Moonshot, with more than 3.4 million queries, allegedly targeted the development of agents capable of using tools and manipulating computing environments. MiniMax allegedly accounted for the largest volume, more than 13 million queries, and according to Anthropic's account, it reacted within hours to the launch of a new system, redirecting its traffic to try to extract capabilities from the most recent release.

A geopolitical issue.
The company states that illicitly distilled models may lose the safeguards meant to prevent state or non-state actors from using AI for purposes such as developing biological weapons or running disinformation campaigns. It also argues that distillation undermines export controls by letting foreign laboratories close the gap by other means, while recognizing that executing these large-scale extractions requires access to advanced chips, which reinforces the logic of restricting their availability. And it warns that the risk would grow if these capabilities end up integrated into military, intelligence or surveillance systems.

Images | Xataka with Nano Banana Pro

In Xataka | Seedance is the most jaw-dropping video generation we have seen. And it carries an uncomfortable message: it has surpassed Sora and Veo without NVIDIA chips
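The teacher-student mechanics Anthropic is describing can be sketched generically: a small "student" model is fitted to the outputs of a more capable "teacher" instead of to ground-truth labels. The toy below is invented purely for illustration (the linear teacher, the learning rate, the data range are all assumptions) and has no relation to Claude or to any lab's actual models.

```python
# Generic knowledge distillation on a toy 1-D problem.
# Teacher: a stand-in for an expensive, capable model.
def teacher(x):
    return 3.0 * x + 1.0

# Student: y = w*x + b, trained by SGD on the teacher's answers.
w, b = 0.0, 0.0
lr = 0.01
data = [i / 10 for i in range(-20, 21)]  # inputs in [-2, 2]

for _ in range(2000):
    for x in data:
        # The error is measured against the teacher's output,
        # not against any ground-truth label: that is distillation.
        err = (w * x + b) - teacher(x)
        w -= lr * err * x
        b -= lr * err

print(round(w, 2), round(b, 2))  # the student recovers 3.0 and 1.0
```

The point of the sketch is the loss target: swap `teacher(x)` for labeled data and this is ordinary training; keep it and the student inherits the teacher's behavior at a fraction of the cost, which is exactly the economics at the center of the dispute.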

DeepSeek is gaining users where the US has the most difficulty

About a year ago DeepSeek appeared on many people's radar in the loudest way possible, with an impact felt even on Wall Street. If the name sounds familiar, that is where it comes from. The interesting thing is that, twelve months later, its weight in the public conversation no longer seems the same, but that does not mean it has disappeared from the board. In parallel, according to the diagnosis Microsoft now offers, the Chinese startup continues to gain traction.

DeepSeek's success is worrying the US. The warning comes from within the American ecosystem itself. Microsoft has warned that US AI groups face growing pressure from Chinese rivals in the battle for users in several markets, precisely because of the combination of "open" models and low prices.

The winning strategy. What explains DeepSeek's expansion has less to do with marketing and more to do with accessibility. The Redmond giant maintains in its report 'Global AI Adoption in 2025' that the company has lowered barriers to entry by offering a free chatbot on web and mobile, an especially attractive combination in cost-sensitive markets.

DeepSeek also makes money. It is worth clarifying this so no one is misled: just because the chatbot is free does not mean there is no business model. The firm founded by Liang Wenfeng distributes its technology with an open approach, with code under the MIT license and a separate licensing scheme for model weights. And, as with most players in this industry, monetization usually happens in the professional field: API access, the interface that lets developers and companies integrate these models into their own applications and services, is where much of the economic value is concentrated.

Microsoft map with estimated DeepSeek market share

The adoption map.
The analysis places DeepSeek's growth far from the markets where the technological narrative is traditionally decided, and breaks it down into two kinds of scenarios: emerging countries, and countries where US services are limited or restricted. According to usage data, the Chinese group is estimated to hold around an 18% share in Ethiopia and 17% in Zimbabwe. Where American tech products are limited or restricted, the advance would be even greater, always according to these estimates: 56% in Belarus, 49% in Cuba and 43% in Russia.

Target: Africa. Brad Smith, president of Microsoft, said in an interview with the Financial Times that if AI is to be deployed at scale in Africa, the problem is not just the software but the infrastructure supporting it. By his analysis, many African countries will need investment to build data centers and, in addition, mechanisms to subsidize the cost of electricity, one of the major operational limits. And here he introduces a relevant point: if the race depends solely on private capital, "it will not be enough" to compete with companies backed by the level of subsidy that, he maintains, Chinese companies frequently enjoy.

A success still being measured. In essence, this case leaves a fairly clear idea: although DeepSeek sounds less popular today than it did a year ago, its approach is having a real impact in markets where large American technology companies cannot easily deploy. It is an expansion driven more by accessibility than by narrative, and that is why it is hard to follow from the West until the data starts to appear. From here, the most interesting thing will be to see what happens in 2026: whether DeepSeek manages to sustain that advantage, and which other Chinese models, pushed by the same combination of openness, price and domestic support, decide to follow in its wake.
Images | Xataka with Gemini 3 Pro | Screenshot

In Xataka | Anthropic has rewritten its 25,000-word "constitution" for Claude. It is the manual for how AI should behave

Moore Threads soars 500% on debut and turns to gold for the creator of DeepSeek

Beijing's quest for technological self-sufficiency has a new king: Moore Threads. The chip designer has staged a historic stock market debut in Shanghai, with its shares soaring more than 500% on the first day of trading. The euphoria has validated the strategy of a company which, despite being on the US blacklist, has become one of the great hopes for breaking the semiconductor blockade. And the great beneficiary of the maneuver has been the founder of DeepSeek.

A debut and million-dollar profits. The IPO did not follow the usual channels. The China Securities Regulatory Commission green-lit the operation in just four months, a record compared to the usual average of 470 days, which underlines the state's urgency to capitalize on the sector. According to SCMP, Liang Wenfeng, through his fund, acquired more than 82,000 shares before the debut. The result: a profit of almost $5.6 million in 48 hours. Nikkei Asia confirms that the company has reached a capitalization of 305 billion yuan (about $42 billion), making it the fourth most valuable company on the STAR market. And it is not yet profitable: it hopes to be by 2027.

The pedigree of the alternative. The market is not buying just anything; it is buying the Chinese alternative to NVIDIA. Moore Threads is not just another startup: it was founded in 2020 by Zhang Jianzhong, formerly general manager of NVIDIA in China. In fact, that insider knowledge is what led the US to consider it a direct threat and put it on its blacklist in 2023. Its GPUs, such as the MTT S4000, are the spearhead of an industry that seeks to replace the American H100 and H200 (the latter will now reach China directly) in state data centers, where the government already requires a 50% share of local chips for this critical equipment.

It's not just chips, it's software.
What makes Moore Threads dangerous to Jensen Huang's business is not just the silicon, but its attack on a technology central to NVIDIA: CUDA. The Chinese startup has developed MUSA, a platform that allows code written for NVIDIA to be recycled and run on its own GPUs. That removes the main barrier for Chinese companies that wanted to migrate but were trapped in the American software ecosystem. And it is also the missing piece in the puzzle of the historic alliance of Chinese companies forged to dethrone NVIDIA.

The circle closes. The DeepSeek creator's investment in Moore Threads is not merely financial. DeepSeek, which already hinted in August that it would no longer need NVIDIA chips, is collaborating closely with the chipmaker to optimize its AI models on domestic hardware. With an alternative to NVIDIA that has tripled its value and an AI capable of competing with Gemini and ChatGPT, China is building a closed ecosystem where hardware and software feed each other. It is a symbiosis that not only unites but also shields Chinese industry against any future sanctions from Washington.

Cover image | Composition with images from Moore Threads and Matheus Bertelli for Pexels

In Xataka | Cambricon Technologies: this company is China's fist on the table in its bid to beat the US in AI

DeepSeek has launched its new reasoner model. It’s free and beats GPT-5

DeepSeek has introduced DeepSeek-V3.2 and DeepSeek-V3.2-Speciale, AI models that combine complex reasoning with the ability to use tools autonomously.

Why it matters. The Hangzhou company claims that DeepSeek-V3.2 matches the performance of GPT-5 in multiple reasoning tests. The Speciale model reaches the level of Gemini 3 Pro and has earned gold medals in international mathematics and computer science olympiads.

The context. DeepSeek surprised the world in January with a model that was revolutionary in efficiency and cost. Now it ups the ante with open-source systems that throw down the gauntlet directly to OpenAI and Google on reasoning capabilities.

Technical innovation. DeepSeek-V3.2 integrates "thinking" directly into tool use for the first time. It can reason internally while running web searches, operating a calculator or writing code. The system works in two modes: with visible reasoning (similar to the reasoning shown by ChatGPT and company) or without any reasoning. The chain of thought persists between tool calls and restarts only when the user sends a new message.

How they achieved it. The researchers developed DeepSeek Sparse Attention (DSA), an architecture that greatly reduces the computational cost of processing long contexts. The model keeps 671 billion total parameters but activates only 37 billion per token.

In figures. DSA cuts the cost of inference on long contexts by approximately 50% compared to the previous dense architecture. The system processes 128,000-token context windows in production. Reinforcement training consumed more than 10% of the total pretraining compute. The team generated more than 1,800 synthetic environments and 85,000 tasks to train agentic capabilities.

The results. DeepSeek-V3.2-Speciale has won gold medals at the International Mathematical Olympiad 2025, the International Olympiad in Informatics 2025, the ICPC World Finals 2025 and the Chinese Mathematical Olympiad 2025.
Both models are available now. V3.2 works in the app, on the web and via API; V3.2-Speciale is API-only, at least for now.

Between the lines. DeepSeek has published the full weights and a technical report on the training process. That transparency contrasts with what the large American technology companies usually do, even those that offer open-source models such as Llama, always with an asterisk. The Chinese startup wants to demonstrate that open-source systems can compete with the most advanced proprietary models. And it does so while continuing to cut costs.

Yes, but. Public benchmarks do not always reflect performance on real-world tasks. Direct comparisons with GPT-5 or Gemini 3 Pro depend on specific metrics that may not capture every relevant dimension. Furthermore, tool integration in reasoning mode still needs to be tested in complex real-world use cases. The reduced cost matters less if the quality of the responses does not hold up.

In Xataka | DeepSeek guide: 36 features and things you can do for free with this AI

Featured image | Solen Feyissa
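The general idea behind sparse attention is that each query mixes values from only a small, selected subset of positions instead of all of them. The Python toy below is a generic top-k illustration, not DSA itself (whose selection mechanism is described in DeepSeek's technical report); the 128,000-token context and 2,048-position budget are hypothetical numbers used only to show the scale of the saving.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sparse_attention(query, keys, values, k):
    """Attend over only the k best-scoring positions (top-k sparsity)."""
    scores = [sum(a * b for a, b in zip(query, key)) for key in keys]
    top = sorted(range(len(keys)), key=lambda i: -scores[i])[:k]
    weights = softmax([scores[i] for i in top])
    dim = len(values[0])
    # Mix values from the selected positions only.
    return [sum(w * values[i][d] for w, i in zip(weights, top))
            for d in range(dim)]

# Tiny worked example: 3 positions, keep the top 2.
query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
out = sparse_attention(query, keys, values, k=2)

# With a 128k context and a hypothetical 2,048-position budget, the
# softmax and value mixing touch only a small fraction of positions.
n, k = 128_000, 2_048
print(f"positions mixed per query: {k / n:.1%}")  # 1.6%
```

The worked example drops the middle position (score 0.0) and blends the other two, which is the whole trick: per-query work in the expensive mixing step scales with the budget k rather than the full context length n.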

There is already a first crack in Chinese technological optimism: DeepSeek

Chen Deli, a senior researcher at DeepSeek, admitted at a state conference that he is "extremely positive about the technology, but pessimistic about its impact on society." It is the first time a representative of the Chinese company has spoken publicly since February, when its founder met with Xi Jinping after triggering that global earthquake with the launch of R1. And he has done so with a pessimistic outlook.

Why it matters. The message comes from a company the Chinese government has turned into a symbol of technological capacity and resilience in the face of US sanctions. That one of its leaders acknowledges serious risks to employment is a notable turn in a country where the official discourse is usually triumphalist.

The facts. Chen took part in the World Internet Conference in Wuzhen alongside the heads of five other companies known in China as the "six little dragons" of AI. His diagnosis is gloomy: within one or two years, AI will be good enough to start replacing human jobs; within a decade or two it could take over the rest. "Society could face an enormous challenge," he said. "Tech companies need to take on the role of advocate."

Between the lines. This is not an American CEO selling apocalyptic hype to inflate his valuation. In China, the state regulates technology with a firm hand. When Sam Altman says that AI will "probably lead to the end of the world, but in the meantime there will be big companies," it sounds like marketing. When a DeepSeek executive says it at a government-organized conference, after many months of silence and after its founder met with Xi, it sounds like a party line.

The context. DeepSeek exploded in January with DeepSeek-R1, a low-cost, open-source language model on par with the American leaders. Since then, near-total silence. The founder, Liang Wenfeng, has appeared only once in all this time: at a televised symposium with Xi Jinping in February.
Neither Liang nor the company has commented publicly since then, and they have skipped every major Chinese tech conference.

Yes, but. While sending this message of caution, DeepSeek is consolidating itself as a cornerstone of the Chinese AI ecosystem. Chip manufacturers such as Cambricon and Huawei have developed hardware compatible with its models. In September, the company launched an "experimental" version of its V3 model, notable not so much for its efficiency as for creating an alternative to NVIDIA's CUDA API and for its support for Chinese GPUs. In August, the mere announcement of a model optimized for domestic chips sent sector shares skyrocketing in the local market.

And now what. A little over a week ago, at the APEC forum, Xi Jinping proposed a global body to govern AI, making it "a public good for the international community." Now a DeepSeek representative talks about AI as a potential threat that requires a unified approach from the technology sector. The narrative is shifting from triumphalism to preventive regulation.

Featured image | Xataka, DeepSeek

In Xataka | We believed that no open model could outperform GPT-5. A Chinese startup proves us wrong

The Chinese army is going all-in on AI, and it uses Huawei chips and DeepSeek

China's race to become technologically independent from the United States is reaching the military sector. The armed forces are accelerating the integration of artificial intelligence into their operations and, most importantly, favoring national technologies. In software, DeepSeek. In hardware, Huawei chips.

What's happening. The Chinese army is using AI to support strategic decision-making and target detection. According to a Reuters analysis, several studies and patents suggest it is also applying AI in new vehicles such as robot dogs and autonomous drones, all while prioritizing national technologies in both software and hardware.

Why it matters. China has already taken steps to stop depending on NVIDIA, the maker of the most powerful AI chips. This is one more step toward technological independence, but in a critical sector like the military. The objective is to eliminate foreign influence from its defense infrastructure, just as the United States does.

Huawei chips. Speaking to Reuters, defense policy expert Sunny Cheung says that since the beginning of this year the Chinese military has increased the number of contractors that exclusively use national hardware, that is, AI chips made by Huawei. Although the military still uses NVIDIA chips (it is not known whether they were imported before or after the blockade), there is a clear movement toward homegrown chips.

DeepSeek. At the beginning of the year, military experts in China said the armed forces were testing DeepSeek integration. In May, researchers from Xi'an University demonstrated a DeepSeek-based system capable of creating and analyzing 10,000 combat scenarios in just 48 seconds. Reuters analyzed several tenders awarded by the Chinese military to various companies, and at least a dozen mentioned DeepSeek while only one referenced Alibaba's Qwen. It is clear which model the Chinese army prefers.

Robot dogs and drones.
The documents analyzed by Reuters also suggest that the Chinese military is integrating AI into autonomous vehicles such as robot dogs. It is no secret: in 2024 the army itself published a video promoting robot dogs that moved in packs to clear explosives and other threats. The robots in the video were from the Chinese company Unitree, but other national companies also manufacture these vehicles, such as Norinco, which confirmed in a technical report that it uses Huawei chips. DeepSeek, for its part, is also being integrated into drones to give them the ability to recognize and follow targets with hardly any human intervention.

Image | Wikipedia, Flickr

In Xataka | Europe already has the future of war drones within reach. And it is offered by a country accustomed to them: Israel

Hangzhou is the city of DeepSeek, Alibaba and Unitree, without any of the typical Silicon Valley ingredients. Its secret lies elsewhere

Hangzhou, a city of 12 million inhabitants 180 km south of Shanghai, is home to a striking number of powerful technology companies: seven benchmark firms (the six ‘little dragons’ plus the giant Alibaba) in a city that lacks the elements considered essential in Silicon Valley: abundant venture capital, leading universities, links between university and industry, and a robust industrial structure. How, then, could Hangzhou emerge?

The facts. Venture capital is plummeting in China. Yuan-denominated funds have fallen from 88.42 billion dollars in 2022 to 5.38 billion in 2024; dollar funds, from 17.32 billion to 750 million. Hangzhou was not a major recipient of investment until last year, when its province, Zhejiang, stood out with 41 new corporate venture capital funds. But that came only after Unitree and Game Science had gained national attention.

Missing ingredients. Hangzhou has only one elite university, Zhejiang, compared to 26 in Beijing, 11 in Jiangsu or 10 in Shanghai. The admission rate at Tsinghua and Peking Universities for students from the capital (0.85%) is almost ten times that of students from Zhejiang (0.09%). None of the founders of the ‘six little dragons’ or Alibaba created their company straight out of university. Liang Wenfeng founded High-Flyer, the hedge fund behind DeepSeek, eight years after graduating. Jack Ma was rejected for 30 jobs after finishing his studies.

Yes, but. The city has innovated by doing without those ingredients. The explanation offered by Zilan Qian, a researcher at the Oxford China Policy Lab, in ChinaTalk points to “flexible governance”: a model in which officials adopt a “waiter” and “babysitter” mentality, facilitating rather than controlling.

The context. Hangzhou does not have the political, financial or industrial weight of first-tier cities, which has given it greater local autonomy to shape its technology sector.
Zhejiang province was a pioneer from the 1980s in promoting private enterprise during the early phases of China’s economic reforms. Jack Ma tried to establish Alibaba’s headquarters in Beijing or Shanghai, but failed due to the cost of rent and bureaucratic barriers. In 2015, Ma explained his decision: “Beijing favors state-owned enterprises, Shanghai favors foreign companies, and Alibaba was nothing in their eyes. If we return to Hangzhou, we become the local only child who receives all the attention and support.”

Hangzhou is part of what is sometimes called the “Chinese technology triangle” (sometimes also the “golden triangle”) along with Shanghai and Shenzhen. More than a geometric reality, the metaphor describes the complementarity of three cities: Shenzhen provides industrial capacity and hardware; Shanghai concentrates finance and internationalization; Hangzhou stands out in the internet, AI and an ecosystem favorable to private companies. Each vertex of the triangle has different strengths that, combined, generate an ecosystem where geographical proximity facilitates collaboration and the flow of talent among the three poles.

Between the lines. The model is described as “market-oriented” but maintains a level of centralized governance. The Hangzhou government sees quality of life as a strategy to attract businesses and talent, and positions itself as an enabler, not a controller. The absence of state-backed research institutes and of a strong industrial base contributes to the government’s humble attitude. If Hangzhou were more strategic or more industrial, DeepSeek might not have had the creative space to emerge and provoke the earthquake it caused in January. The narrative of “self-made industry” and “entrepreneurial bureaucracy” admits conflicting readings: what some interpret as facilitation, others read as a euphemism for dirigiste intervention by the state, with a well-defined plan of action and long-term objectives.
“Flexible governance” can be both real local autonomy and dirigisme disguised as pragmatism. At least Hangzhou is no longer “a city south of Shanghai” but “Alibaba City” or “DeepSeek City”. In Xataka | China is selling us a future full of humanoid robots. We have (many) doubts Featured image | JinHui CHEN on Unsplash
