Amid the RAM crisis, Intel counterattacks with ZAM: the chip meant to break South Korean hegemony

Few would have guessed, not so many years ago, at Intel's transformation. The company that dominated consumer and server processors for generations went through a genuine crossing of the desert under pressure from AMD. But it has fought its way back: not only has it positioned itself to be the great American foundry, it is now looking to take a bite out of the gigantic South Korean RAM industry thanks to a new memory: ZAM. Its weapon is three-dimensionality. Z, as in 'zolution'.

Do you remember drawing your first cube in math class? The X axis runs east-west, the Y axis north-south; what turns the square into a cube is the Z axis, the one that goes up and down. That is what the engineers at SAIMEMORY, the company born from the collaboration between Japan's SoftBank and Intel, have applied to traditional DRAM with a single objective: to assault the enormous market for high-bandwidth memory, or HBM, which dominates data centers.

Puff pastry. A few months ago we told you that the two companies had set out on a joint path to stand up to the dominance of Samsung, SK Hynix and Micron in high-performance memory. HBM is the memory of choice for data centers because its enormous bandwidth allows a greater number of simultaneous operations; it is like a giant highway. But it has limitations: it is expensive to produce, consumes a lot of energy, and runs hot enough to require costly cooling systems. Conventional DRAM was not an alternative, but Intel and SoftBank began to experiment with stacked DRAM. Simplifying a lot, it is like a puff pastry of RAM, and its main limitation lay in connecting each of those thin layers of memory so that the final product matched the highway-like capabilities of HBM.

ZAM. After a few months of research, SAIMEMORY and Intel presented the ZAM prototype a few days ago at Intel Connection in Japan.
According to the companies, a ZAM module can reach a capacity of up to 512 GB; it is easy to produce, because the design consists of vertically stacked chips; and, most importantly, it can cut energy consumption by 40% to 50% compared with conventional HBM. Where HBM is expensive and slow to produce, ZAM would be cheaper, could help ease supply-chain constraints and, on top of that, would lower data centers' energy consumption (one of their biggest problems) while being easier to cool. For now, the companies' research points to a theoretical limit of 20 layers, while current designs sit at around 16, so performance could improve further if that limitation is overcome.

Real alternative. Intel's ambition is total: the companies claim their DRAM-stacking interconnect technology lets them offer two to three times the capacity of HBM modules while being up to 60% cheaper to produce. It all sounds promising, and the technology does not look misguided when established HBM giants like Samsung are also researching how to overcome the interconnect limitations of stacked DRAM.

The prototype | Photo: PC Watch

Ambition. Almost as important as the ZAM prototype itself is the alliance behind it. Intel has been out of the memory market for many years: it tried in the 1980s and again, years later, with its Optane technology, which died without ever carving out a niche. SoftBank, for its part, represents a Japan that led this sector in the 1980s but was eclipsed by emerging South Korean companies. In fact, Intel's memory business was eaten by the Japanese… and the Japanese were eaten by the South Koreans. SAIMEMORY has behind it not only those two sharks, but other backers such as Fujitsu, Shinko Electric Industries, Powerchip Semiconductor Manufacturing and the University of Tokyo.
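Those headline figures (two to three times the capacity, up to 60% cheaper to produce, 40-50% less energy) compound into a striking cost-per-gigabyte gap. Here is a minimal back-of-the-envelope sketch; the baseline capacity and the normalized cost and power units are invented for illustration, and only the ratios come from the claims above:

```python
# Hypothetical HBM baseline (absolute numbers are invented;
# only the ZAM/HBM ratios come from the companies' claims).
hbm_capacity_gb = 192      # assumed HBM module capacity
hbm_cost = 100.0           # normalized production cost, arbitrary units
hbm_power = 100.0          # normalized power draw, arbitrary units

# Apply the conservative end of each claimed range.
zam_capacity_gb = hbm_capacity_gb * 2   # "two to three times the capacity"
zam_cost = hbm_cost * (1 - 0.60)        # "up to 60% cheaper to produce"
zam_power = hbm_power * (1 - 0.40)      # "40% to 50% less energy"

hbm_cost_per_gb = hbm_cost / hbm_capacity_gb
zam_cost_per_gb = zam_cost / zam_capacity_gb
reduction = 1 - zam_cost_per_gb / hbm_cost_per_gb

print(f"Cost per GB: HBM {hbm_cost_per_gb:.3f} vs ZAM {zam_cost_per_gb:.3f}")
print(f"Implied cost-per-GB reduction: {reduction:.0%}")  # → 80%
```

Even at the conservative end of both ranges, doubling capacity while cutting cost by 60% implies roughly an 80% lower cost per gigabyte, which is why the claim, if it survives contact with mass production, would be so disruptive.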
And if ZAM memory works at a commercial level, it will not only be good news for easing memory supply chains (perhaps also relieving a consumer market battered by data-center demand), but will mark the birth of a new and ambitious player out to break the hegemony of the trident that currently leads the sector. It will take a few years, though: SAIMEMORY plans complete prototypes in fiscal year 2027 and the start of commercialization in 2029.

Image | Samsung, Maxence Pira

China's 'Big 5': the five AI companies out to break the hegemony of the US

China no longer chases Silicon Valley in AI. In some respects it has overtaken it, and the earthquake DeepSeek caused at the beginning of the year was the start of the consolidation of a 'Big 5'. After years of dispersion, five companies have pulled away from the rest of the Chinese ecosystem to compete directly with the American giants developing foundation models.

Why it matters. Five Chinese companies have emerged as serious competitors to OpenAI, Google, Anthropic and company. Their strategy combines large investments, international talent and approaches that favor open source.

The context. Two years ago, China was fighting a scattered battle, with dozens upon dozens of models and companies experimenting without clear leadership or meaningful differentiation. Today the landscape is different: a quintet has developed technology that rivals the big American players. They symbolize China's transition from technological imitator to pace-setting innovator. Their models, however, are not free of government censorship, and they do not yet compete internationally at the same level as the Americans, for a mix of political, demographic and business-model reasons.

The five champions. Alibaba, ByteDance, StepFun, DeepSeek and Zhipu are the new faces of Chinese power. Each has found its own niche, from open-source dominance to pure research.

Alibaba: the king of open source. Alibaba has become, together with Meta, one of the world's great leaders in open AI development. Its Qwen model series dominates the open-source ecosystem: the ten most popular models on Hugging Face are derivatives built on Qwen technology. Alibaba's strategy combines massive investment with radical openness: it will devote 380 billion yuan (about 52 billion dollars) to AI and cloud-infrastructure research over the next three years, a budget on par with the big American players. This commitment to open source is not altruistic.
By releasing its advances, Alibaba accelerates innovation worldwide while positioning itself as the premium provider of infrastructure and services. Developers adopt Qwen for free… but companies pay for cloud services, support and specific optimizations. The model has proven effective: while OpenAI and Anthropic keep their systems closed, Alibaba is building an ecosystem in which thousands of developers continually improve its base models. That distributed collaboration network can outpace the innovation speed of closed laboratories, however well financed they are.

ByteDance: giant and massive. ByteDance brings to the group something no other Chinese competitor possesses: proven experience in global mass-consumer products. The company behind TikTok carried its knowledge of viral algorithms and user experience into AI, creating Doubao, a conversational platform that exceeds 100 million monthly active users according to figures released by the company itself. That scale of adoption gives ByteDance a unique advantage: real interaction data at enormous volume. While other laboratories train on static datasets, ByteDance optimizes its models with constant feedback from millions of real conversations, continually refining their behavior and usefulness. The company has complemented its consumer-product experience with technical talent such as Wu Yonghui, who leads its large-model research team and was previously a senior researcher at DeepMind, bringing direct knowledge of Silicon Valley's developments. Its programming tool, Trae, targets business markets and uses OpenAI and Anthropic models (GPT / Sonnet) as a base, a pragmatic approach similar to Perplexity's: use the best available technology while developing capabilities of your own.

StepFun: the discreet national team. StepFun is the most enigmatic member of the Chinese Big 5. Its low profile is deliberate.
Based in Shanghai and backed by Shanghai State-Owned Capital Investment Co., it represents the most strategic facet of China's national bet on AI. It earned the nickname 'King of Involution' in multimodal models, a recognition of its ability to beat rivals on technical benchmarks, achieving top positions both in the international Chatbot Arena and, in first place, in OpenCompass, the most reputable evaluation platform in China. Its competitive advantage lies in a technical team led by Xiangyu Zhang, one of the four co-authors of the ResNet paper. That work, a turning point in the study of deep neural networks, holds the record as the most cited paper in any field published in the 21st century, which lends immediate scientific credibility to StepFun's developments. State backing allows StepFun to pursue long-term research free of immediate commercial pressure, similar to the model laboratories like DeepMind or OpenAI followed in their early phases. Or DeepSeek's, but more on them later.

Zhipu: pioneer in intelligent agents. Zhipu represents the most direct connection between China's academic elite and the commercialization of advanced AI. Incubated at Tsinghua University, the most prestigious technical institution in China, it combines academic rigor with business ambition and wants to become the first Chinese LLM startup to go public. It is a pioneer in the development of AI agents: its Phone Use concept and its agent launches ahead of US laboratories demonstrate a capacity for independent innovation, not just for tracking Western trends. Its most advanced product, AutoGLM Rumination, is what the company bills as the world's first L3-level intelligent agent. 'L3' is, in any case, Zhipu's internal categorization, not an internationally recognized standard metric.
The imminent IPO. Before the end of the year, the sector expects a milestone: it will be the first public market validation of the value of Chinese AI companies. From there, the market will have its say.

DeepSeek: pure research as an advantage. DeepSeek showed that fundamental-research approaches can compete head-on with hyper-financed commercial products. Its R1 model was an earthquake at the beginning of the year, and it was achieved by a team of engineers freed from immediate financial pressure and focused exclusively on technical efficiency and innovation. The company's philosophy contrasts with the Silicon Valley environment: its engineers are encouraged to improve efficiency from a purely research-driven perspective, without the financial pressure to generate short-term profit. That environment is fertile ground for risky experimentation and the exploration of unconventional approaches. The success of DeepSeek…
