Huawei is about to deliver the ingredient China needs to challenge US leadership in AI

China has a very serious problem in the field of hardware for artificial intelligence (AI) applications. At the moment, Chinese chip manufacturers are not producing solutions capable of competing with the most advanced memories made by the South Korean companies Samsung and SK Hynix, or by the American Micron Technology. GPUs for AI work side by side with HBM (High Bandwidth Memory) chips; in fact, their performance is largely conditioned by these memories.

SK Hynix, Samsung and Micron are manufacturing 12-layer HBM3E memories on a large scale, albeit with varying degrees of success. The two South Korean firms will produce HBM4 chips at scale during the second half of 2025, and Micron will do so in 2026. However, CXMT (Changxin Memory Technologies), one of the Chinese companies specialized in memory production, will not launch its first HBM3E chips until 2027. Given its current contest with the US, China cannot afford to lag two years behind the West in the production of HBM memories.

And it seems that this lag is about to disappear. Last week the Chinese state media outlet Securities Times revealed that Huawei was about to present a technological breakthrough that would end China's dependence on HBM memory chips from abroad. And today DigiTimes Asia has reported very important news: this company is already testing the first HBM3 chips manufactured entirely in China. As we have just seen, this milestone is crucial for the country because it presumably gives it access to a technology that is currently out of its reach.

Huawei does not rest

Huawei invests more than $25 billion annually in the development of its AI hardware, so presumably it will not take long to match the performance of the GPUs produced by NVIDIA or AMD. Until now it had two Achilles' heels: its inability to manufacture its chips using the extreme ultraviolet (EUV) lithography equipment produced by the Dutch company ASML, and its difficulty accessing the HBM memory chips manufactured abroad. The latter will cease to be a problem.


And, as we explained last week, during the 2025 Financial AI Reasoning Application Landing and Development Forum held in Shanghai (China), Huawei released an algorithm called UCM (Unified Cache Manager) that, according to the company, is capable of drastically accelerating inference in large AI models. A relevant note: inference is, broadly speaking, the computational process carried out by language models to generate the responses to the requests they receive.

To achieve its purpose, the UCM algorithm deploys a very ingenious strategy: it decides in which type of memory each piece of data should be stored, taking latency requirements as the fundamental indicator. In practice, this algorithm behaves as a gigantic cache that ensures each piece of data goes to the right memory, including HBM3, in order to minimize latency during inference. If a piece of data is used very often, it will be stored in a very fast memory, such as HBM3. According to Huawei, this technology is able to reduce inference latency by up to 90%. Interestingly, the company plans to open source the UCM algorithm in September.
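UCM's internals have not been published, but the idea described above, routing each piece of data to a memory tier according to how often it is accessed, can be sketched with a toy tiered cache. Everything here is an illustrative assumption: the tier names, the promotion threshold, and the class itself are invented for the example and are not Huawei's actual implementation.

```python
class TieredCache:
    """Toy tiered cache: frequently accessed data is promoted
    toward faster memory tiers (an illustrative sketch, not UCM)."""

    # Tiers ordered fastest -> slowest (e.g. HBM, DRAM, SSD).
    TIERS = ["hbm", "dram", "ssd"]

    def __init__(self, hot_threshold=3):
        self.hot_threshold = hot_threshold  # accesses before promotion
        self.store = {}  # key -> (tier, value)
        self.hits = {}   # key -> access count since last promotion

    def put(self, key, value):
        # New data starts in the slowest tier.
        self.store[key] = ("ssd", value)
        self.hits[key] = 0

    def get(self, key):
        tier, value = self.store[key]
        self.hits[key] += 1
        # Promote hot data one tier up, toward the fastest memory.
        if self.hits[key] >= self.hot_threshold:
            idx = self.TIERS.index(tier)
            if idx > 0:
                self.store[key] = (self.TIERS[idx - 1], value)
                self.hits[key] = 0
        return value

    def tier_of(self, key):
        return self.store[key][0]


cache = TieredCache(hot_threshold=2)
cache.put("kv_block_0", b"attention keys/values")
for _ in range(4):
    cache.get("kv_block_0")
# Accessed 4 times with threshold 2: promoted ssd -> dram -> hbm.
print(cache.tier_of("kv_block_0"))  # hbm
```

A real system would also demote cold data and weigh access latency per tier; this sketch only shows the promotion-by-frequency half of the idea.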

More information | Digitimes Asia

In Xataka | NVIDIA has to deal with the absolute distrust of several US legislators. Its plan for China is in danger

In Xataka | The US wants to end the sale of chips to China from abroad. And China knows how to defend itself
