If anyone was waiting for the AI bubble to burst, NVIDIA’s results have a message: sit tight

NVIDIA just published its results for the fourth quarter of its last fiscal year and has left Wall Street speechless. Revenues of $68.1 billion, a net profit that nearly doubles that of the same period a year earlier, and a forecast for the next quarter that has far exceeded analysts’ expectations. And all this in a turbulent context where more efficient models and other alternatives are beginning to appear. The DeepSeek crash feels far away, and demand for chips is not slowing down. Here are the numbers in detail.

In case its position was not clear. Only a handful of companies in history have exceeded $100 billion in annual profit. Alphabet, Microsoft and Apple are in that club. NVIDIA has just joined them, with $120 billion in profit over the last twelve months, according to the report. The difference is speed: just three years ago, its annual profit was $4.4 billion. We can say with certainty that no technology company has ever grown so quickly at that scale.

AI, and more AI. The engine behind these profits is its data center business, which generated $62.3 billion in the quarter, 71% more than a year ago. Within that segment, revenue from its Blackwell chips has gone from $32.6 billion to $51.3 billion, while networking (NVLink, Spectrum-X and InfiniBand) has grown from $3 billion to $11 billion. Gross margin stands at 75%, and earnings per share nearly doubled to $1.76 in GAAP terms (the official rulebook companies follow to keep their accounting transparent).

What Jensen Huang says. “Without computing, there is no way to generate tokens. Without tokens, there is no way to grow revenue,” NVIDIA’s CEO told investors on the earnings call. His thesis is that in the new AI economy, computing power directly equates to revenue for its customers.
That is why the large cloud providers (Google, Amazon, Microsoft, Meta) keep increasing their capex budgets, which together will exceed $500 billion in 2026 to build AI data centers. And NVIDIA is the main beneficiary of that spending.

What DeepSeek didn’t break, but accelerated. At the beginning of 2025, the emergence of the Chinese model DeepSeek caused an unprecedented tremor in the markets, leaving a simple question in our minds: if AI becomes more efficient, why do we need so many chips? The answer from NVIDIA’s results is that efficiency does not reduce infrastructure demand, it multiplies it. Every improvement in inference efficiency lowers the cost per token, encouraging more companies to deploy more AI applications, which in turn requires more compute. It’s Jevons’ paradox applied to AI: efficiency expands the market instead of contracting it.

Agentic AI as the next catalyst. On the same call with investors and analysts, Huang highlighted that “enterprise adoption of agents is skyrocketing.” AI agents, systems that make decisions and execute tasks autonomously, require many more inference cycles than chatbots. They are the next step in the AI value chain, and NVIDIA is once again in a privileged position. Colette Kress, the company’s CFO, also confirmed that the first samples of Vera Rubin, the next generation of chips arriving later this year, have already been shipped.

China and the competition. Not everything is rosy. NVIDIA acknowledged that its forecast for the next quarter ($78 billion) does not include computing revenue from China. The company has generated just about $60 million from H20 chips since the Trump administration reapproved some sales in August 2025, according to SEC filings, and has yet to earn revenue from the recently approved H200. Regulatory uncertainty with Beijing remains a pebble in Huang’s shoe.
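The Jevons dynamic described above can be sketched with a toy model. All numbers here are illustrative assumptions, not NVIDIA or market data: the point is only that when demand for tokens is elastic enough, an efficiency gain increases, rather than reduces, the number of chips needed.

```python
# Toy model of Jevons' paradox applied to AI inference.
# All numbers are illustrative assumptions, not real market data.

def compute_demand(cost_per_token: float, elasticity: float = 1.5) -> float:
    """Tokens demanded as a function of cost, with constant elasticity.

    demand = cost^(-elasticity), normalized so demand is 1.0 at cost 1.0.
    """
    return cost_per_token ** -elasticity

def chips_needed(cost_per_token: float, tokens_per_chip: float) -> float:
    """Chips needed to serve the demanded tokens."""
    return compute_demand(cost_per_token) / tokens_per_chip

# Baseline: cost 1.0 per token, one chip serves 1.0 units of tokens.
before = chips_needed(cost_per_token=1.0, tokens_per_chip=1.0)

# A 2x efficiency gain: each chip serves twice the tokens,
# so the cost per token halves.
after = chips_needed(cost_per_token=0.5, tokens_per_chip=2.0)

# With elasticity > 1, demand grows faster than efficiency improves,
# so MORE chips are needed, not fewer.
print(before, after)  # after > before
```

With an elasticity above 1, halving the cost per token more than doubles token demand, so total chip demand rises even though each chip does twice the work; with elasticity below 1 the opposite would happen, which is exactly the bet the market is making.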
In parallel, competitors such as AMD, Broadcom and Google’s own custom chips (TPUs) are gaining ground. But NVIDIA’s CEO remains focused on his vision. As he put it on the call: “Every company depends on software, and all software will depend on AI.” As long as that holds, everything indicates that NVIDIA will keep selling the picks and shovels.

Cover image | NVIDIA

In Xataka | NVIDIA was founded by three engineers, but only Jensen Huang remains CEO: “I wish I had kept some shares”

If you’re in a hurry to upgrade your PC, NVIDIA’s CEO has bad news: don’t be in a hurry

Talking about artificial intelligence is talking about Jensen Huang. NVIDIA’s CEO has become the face of an entire industry. To a large extent, it is his company’s products that power the engines of data centers, while the enormous semiconductor and memory industries supply the essential components of NVIDIA GPUs. And if for weeks Huang has been saying that 2026 is going to need wafers and a lot of RAM, he has now asked for patience with AI: in his view, it has another seven or eight years of unchecked growth ahead.

In short. When we talk about artificial intelligence, there are two poles. On one side, those who see signs of a bubble that will burst in the short term. On the other, those who defend the billion-dollar investments against all odds. In that boat is Jensen Huang, who recently noted on CNBC that this massive spending is “necessary and appropriate” because a “once-in-a-generation infrastructure” is taking shape. The most interesting thing is that, for him, this race will continue for several years: he estimates that investment in and construction of AI infrastructure has seven or eight years left.

Mountains of money. In his statements, Huang pointed out that companies like Anthropic and OpenAI are making money despite everything they have invested, and that their current brake is not so much budget as the limit of available computing power. That is why he wants his suppliers (Samsung with next-generation HBM4 memory, TSMC with processors) to pick up the pace. It remains to be seen, however, whether that pace can be sustained over the next five years. On CNBC, NVIDIA’s CEO argued that, despite the astronomical sums, the spending is sustainable. And the proof, he says, is that it keeps increasing: if in 2025 Big Tech’s total spending did not reach $400 billion, this year the figure for American companies is expected to rise to $650 billion.
Amazon and Alphabet (Google) alone will invest about $385 billion. They see the AI computing race as the next “winner takes all,” and none of them is willing to lose – DA Davidson analyst Gil Luria, speaking to Bloomberg.

A parallel race. And that, as we say, is just the American companies, since China is the other pole in this race for artificial intelligence. The Asian giant is the birthplace of several extremely capable models, but it also has something the United States lacks: energy to feed AI’s enormous needs. China is betting on AI, but also on robotics, and all this while it buys NVIDIA products and develops its own semiconductor industry with the goal of achieving technological sovereignty. It is a race parallel to that of the United States, and beyond the two poles of infrastructure development, there are individual names worth noting.

So much money being invested means opportunities are being created, and there are companies that have gone through a rough patch and want to ride the wave. For example, an Intel that, after needing a rescue from the United States, is positioning itself as one of the country’s great foundries. It is also stepping into a segment it had not explored, DRAM memory, and it is doing so with the Japanese giant SoftBank. Japan has not had a say in the memory industry since the 80s, when South Korea snatched its position, and now it may have another chance.

Translation for the user. These are a couple of examples of companies taking advantage of current conditions to obtain financing and expand, seeking to position themselves in what they have decided is the future of the technology industry. With that amount of money and investment in play, there is a question you may be asking yourself: will I be able to buy a PC? The answer is not hopeful.
Giants like Micron, one of the heavyweights in the RAM segment, are investing heavily to expand facilities and increase memory output, but that memory will not be for us: it will be for data centers. If the end of 2026 or 2027 had been targeted as the end of the crisis for components like RAM and SSDs (which are, after all, built from memory modules), now it is Lip-Bu Tan, CEO of Intel, who says it won’t be until 2028, at the earliest, that we will see light on the horizon. So, yes, the entire tech industry has turned to AI, and those that can increase their production of key components will do so over the next few years. The issue is that they are going to focus on components that mean nothing to ordinary users, neglecting the ones we actually need day to day. And an example is NVIDIA itself.

Image | NVIDIA

In Xataka | Apple has been the industry’s first customer for decades. AI is relegating it to the background

Intel refuses to be left out of the AI race. Its next move points directly at NVIDIA’s territory

The AI fever is not only redefining software; it is also turning the map of power in the chip industry upside down. On this new board, the GPU has become the essential engine for building models and scaling data centers, to the point that demand has skyrocketed and placed its main manufacturers in a dominant position. For Intel, the diagnosis is hard but clear: if the next decade of computing will be decided in this arena, protecting the kingdom of the CPU is not enough.

Intel’s move. The Santa Clara company has chosen a very specific setting to begin laying out its message. During an AI Summit organized by Cisco, the company’s CEO, Lip-Bu Tan, said that Intel will start producing GPUs and has just hired the “chief GPU architect” who will lead that effort. The executive avoided giving a name, but he did leave a message consistent with the moment in the sector: the GPU matters and will continue to matter.

The missing piece. According to Reuters, the talent hired by Intel is Eric Demers, from Qualcomm. The initiative would fall under the umbrella of Kevork Kechichian, executive vice president and head of Intel’s data center business, who joined in September as part of a series of hires aimed at strengthening the company’s technical bench.

AI, before gaming. The nuance matters, because talk of GPUs can automatically conjure images of gaming graphics cards, but reality points in another direction. Intel already has a presence in PC graphics with its Arc products, but the announcement targets GPUs for AI and data centers. The initiative is still an early-stage plan, with a strategy to be developed based on customer demand, a coherent approach in an AI infrastructure market where the most intense battle is being fought today.

Intel’s corporate moment.
According to CNBC, the company’s stock market value has risen over the last year on optimism about its foundry business, but Intel is still mainly dedicated to manufacturing chips for its own catalog. It is no secret that Intel has lost ground to companies propelled by the AI data center wave, and it is now taking steps to respond.

No relief until 2028. In the same forum, Tan slipped in another element that helps gauge the scale of the AI infrastructure challenge. He spoke of the memory chip shortage that is disrupting the market due to the mismatch between supply and demand, driven by the construction of AI-oriented data centers. That environment is giving manufacturers room to keep raising prices, and Tan was blunt in describing memory as AI’s “biggest challenge.” He also offered an estimate that leaves little room for optimism: he said he expects no relief until 2028.

Images | Brecht Corbeel

In Xataka | Goodbye to the duopoly of Intel and AMD in Windows: the arrival of NVIDIA processors is imminent and brings 8 laptops under its arm

The US authorizing Nvidia’s H200 to reach China is not a concession, but a plan. They prefer money to competition

The chip war between China and the US has mutated from a blockade into a commercial transaction. Donald Trump has announced that he will allow Nvidia to export its high-performance H200 chips to China. The authorization carries an unprecedented condition: the US government will receive a 25% commission on these sales. This “reverse tariff” transforms the containment of China into a source of income, breaking with the strategy of total suffocation and offering Nvidia a lifeline in its most critical market.

The end of blocking for free. The decision is the direct result of a meeting last week between Trump and Jensen Huang, Nvidia’s CEO. The White House’s logic has changed: it argues that the measure is carried out under strict national security conditions, and extends the model to competitors such as Intel and AMD. The move formalizes what was already intuited a few months ago, when Nvidia managed, after a first meeting with Trump, to get the veto lifted on the lower-end H20 chip. At that time, a precedent was set of handing over 15% of revenue to the country, a figure that now scales to 25% for the most powerful hardware.

A dose for China. That they chose this chip is no coincidence: the H200 is significantly more powerful than the H20 (the cut-down model that China had started to boycott), but it is still behind the cutting-edge Blackwell architecture, which remains banned. According to advisors such as David Sacks, the US seeks to keep China addicted to its technology: if denied all access, China would be forced to develop alternatives of its own. In fact, Huawei has already admitted that it will take two years to match the performance of the H200, making this chip the perfect tool to slow down Chinese development while monetizing its need.

Cracks and black market. The reality is that the total blockade was failing.
Recent investigations showed how Chinese companies used shortcuts through Indonesia to access the power of banned chips. Furthermore, the second-hand market had become the main avenue for China to get H100 and A100 GPUs under the radar. By allowing the sale of the H200, the US is trying to regain control over a flow that already existed, but in the shadows. At the same time, the Department of Justice announced “Operation Gatekeeper” to dismantle smuggling networks in places like Hong Kong.

China’s response. The great unknown is precisely this: how the news will be received in Beijing. Although Trump claims that Xi responded “positively,” the reality on the ground looks different. For months China has been banning its local companies from buying Nvidia chips in order to promote its domestic industry. The CAC (Cyberspace Administration of China) went so far as to investigate the H20 in search of backdoors, something that generated a climate of mistrust that not even the July agreement managed to fully dissipate. Jensen Huang, who warned of the danger of an “AI silk road” if the US kept blocking sales, gets with this pact a golden opportunity not to lose a market that represents 13% of Nvidia’s revenue, although his Chinese clients must now pay the price of American geopolitics.

Cover image | Composition with images from Nvidia and RawPixel

In Xataka | China has just redrawn the map of strategic minerals: its new rules on rare earths target the United States

Google’s TPUs are the first big sign that NVIDIA’s empire is faltering

It was 2013, and Jeff Dean, one of Google’s directors, realized something along with his team: if every Android user used the new voice search option for three minutes a day, the company would have to double its number of data centers to cope with the computational load. At the time Google was using standard CPUs and GPUs for the task, but that moment of panic made clear that it needed to create its own chips. That is how Google’s first Tensor Processing Unit (TPU) was born: an ASIC specifically designed to run the neural networks powering its voice services. The project grew and grew, and by 2015, before the world knew it, those first TPUs were accelerating Google Maps, Google Photos and Google Translate. A decade later, Google has created TPUs so powerful that they have, almost unintentionally, become a surprising and unexpected threat to the almighty NVIDIA. No small feat. Blessed panic.

Google’s TPUs deliver on their promise

Until now, when an AI company wanted to train its models, it turned to advanced NVIDIA chips. That has changed in recent times, and in fact we have seen two recent signs that mark a genuine turning point. Missing from that timeline is the latest and most striking member of the family, Ironwood, presented in April 2025. Source: Google.

The first is the release of Claude Opus 4.5, an exceptional model, especially in programming tasks. Anthropic has explained that this new model does not depend only on NVIDIA, but combines the power of three different platforms: NVIDIA’s chips, Amazon’s Trainium and Google’s TPUs. But Google has also made headlines because its brand-new AI model, Gemini 3, was trained exclusively on the new Ironwood TPUs that were presented in April and have become a real sensation.
As we said, Google started the project in 2013 and launched its first TPU in 2015, but that internal need became a blessing, because what Google could not know was that these TPUs would arrive at exactly the right time: the launch of ChatGPT turned them into a fantastic opportunity to strengthen its AI infrastructure, and also to train and serve its own AI models. From there we arrive at the current Ironwood TPUs, which in their seventh generation are exceptional both in inference and in training (as their use for Gemini 3 has demonstrated). Google has managed to squeeze even more out of its chips and has doubled peak FLOPS per watt compared to the previous generation. Source: Google.

The efficiency and power of these chips represent a very notable jump over their predecessors: for example, they achieve double the FLOPS per watt of the Trillium chips. Compared with the TPU v5p of 2023, the chips reach 4,614 TFLOPS, 10 times more than the 459 TFLOPS of those models from two years ago. It is an extraordinary leap in performance (and efficiency).

The key to 2025: Google now lets others use its TPUs

But in the evolution of the TPU there is another differentiating element in 2025. This has been the year in which Google stopped “being selfish” with its TPUs. Before, only Google could use them, but in recent months it has reached agreements with OpenAI (which is also seeking to make its own chips) and especially with Anthropic. The performance of Ironwood is already comparable to that of NVIDIA’s GB200 and even GB300. Source: SemiAnalysis.

That second alliance is especially monumental as part of this opening-up strategy. Google is not only renting capacity in its cloud but also facilitating the physical sale of hardware. The agreement covers one million TPUs: 400,000 units of its TPUv7 Ironwood sold directly through Broadcom, and 600,000 rented through Google Cloud (GCP).
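The generational leaps quoted above can be verified with quick arithmetic. The TFLOPS figures are the ones cited in the article; the efficiency line only restates the 2x FLOPS-per-watt ratio, not any absolute power spec:

```python
# Generational comparison of Google TPUs, using the figures cited above.
tpu_v5p_tflops = 459     # TPU v5p (2023), per the article
ironwood_tflops = 4_614  # TPU v7 "Ironwood" (2025), per the article

raw_speedup = ironwood_tflops / tpu_v5p_tflops
print(f"Raw compute leap: {raw_speedup:.1f}x")  # the "10 times more" cited above

# Efficiency: Ironwood doubles peak FLOPS per watt vs Trillium,
# so the same amount of work costs half the energy.
flops_per_watt_gain = 2.0
power_for_same_work = 1 / flops_per_watt_gain
print(f"Power for the same work vs Trillium: {power_for_same_work:.0%}")
```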
A deep-dive report from SemiAnalysis reveals that, from a technical perspective, the TPUv7 Ironwood is a formidable competitor. The performance gap with NVIDIA is closing, and Google’s TPU is practically on par with NVIDIA’s Blackwell chip in FLOPS and memory bandwidth. The real advantage, however, lies in cost. The Total Cost of Ownership (TCO) of an Ironwood server is estimated to be 44% lower for Google than that of an NVIDIA GB200 server, allowing the search giant to offer very competitive prices to clients like Anthropic.

To help even more in that race, SemiAnalysis points out, Google has another ace up its sleeve: its Inter-Chip Interconnect (ICI), a network architecture that allows up to 9,216 Ironwood chips to be connected using a 3D torus topology. Google also uses optical circuit switches that route data optically, without electrical conversion, reducing both latency and power consumption. This lets Google reconfigure the network topology on the fly to avoid (or mitigate) failures and to optimize different types of parallelism.

NVIDIA’s “moat” with CUDA is narrowing

We have often repeated that although semiconductor manufacturers already have flashy chips (look at AMD), NVIDIA’s true strength lies in CUDA, the software platform that has become the de facto standard for AI developers and researchers. Google wants to change things here too. In recent years the company tried to focus on Python libraries such as JAX and XLA, but lately it has started prioritizing native support on its TPUs for PyTorch, TensorFlow’s great competitor. That is crucial to making it easier for engineers and developers to migrate to its TPUs instead of NVIDIA GPUs. It was possible to use PyTorch on TPUs before, but it was cumbersome, as if one had to speak a language through a dictionary in real time, while for NVIDIA GPUs it was the “native” language.
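To build intuition for that 3D torus, here is a minimal sketch. The 9,216-chip pod size and the torus idea come from the article; the 16 x 24 x 24 decomposition below is just one factorization that reaches that total, chosen for illustration (Google’s actual dimensions may differ):

```python
# A 3D torus: each chip links to its neighbors along three axes,
# with wraparound, so no chip ever sits at an "edge" of the network.
# 16 x 24 x 24 = 9,216 matches the pod size; the split is illustrative.
DIMS = (16, 24, 24)

def neighbors(coord, dims=DIMS):
    """Return the 6 torus neighbors of a chip at coordinate (x, y, z)."""
    result = []
    for axis in range(3):
        for step in (-1, 1):
            n = list(coord)
            n[axis] = (n[axis] + step) % dims[axis]  # wraparound link
            result.append(tuple(n))
    return result

total_chips = DIMS[0] * DIMS[1] * DIMS[2]
print(total_chips)                          # 9216
print(len(neighbors((0, 0, 0))))            # 6 links per chip, even at "corners"
print((15, 0, 0) in neighbors((0, 0, 0)))   # True: wraparound to the far side
```

The wraparound links are what make every chip topologically equivalent, which is one reason torus networks scale with short, predictable hop counts instead of ever-larger central switches.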
With XLA, Google used an intermediate library as a translator to run PyTorch, but that was a nightmare for developers. Native support allows Google TPUs to behave just like NVIDIA GPUs in the …

The US vetoed NVIDIA’s most powerful chips in China. It didn’t count on an unexpected problem: Indonesia

NVIDIA is at the center of the technological war between China and the United States. After the blockade, the US allowed the company to sell a version of its H20 chips specific to the Chinese market, but the most powerful chips, the Blackwells, are still banned in China. Or so we believed.

What is happening. Donald Trump made it clear that he does not want China to have access to Blackwell chips, but despite the blockade, an investigation by the Wall Street Journal shows how Chinese companies are benefiting from the computing power of these chips through legal shortcuts.

The process. The investigation details the path NVIDIA’s Blackwell chips follow until INF Tech, a Shanghai-based startup, uses their computing power.

NVIDIA sells its chips to Aivres: Aivres is a Silicon Valley company partially owned by Inspur, a Chinese company on the US blacklist. NVIDIA cannot do business with Inspur or its partners, but the blockade does not affect partners based in the US, as is the case with Aivres.

Aivres sells the chips to Indonesia: specifically to an Indonesian communications provider called Indosat Ooredo Hutchison. The agreement includes the sale of 32 NVIDIA GB200 racks with 72 Blackwell chips each; more than 2,300 chips worth $100 million.

Indonesia sells computing power to China: the end customer for this cloud computing power is INF Tech, which will use it to train AI for financial and medical research applications. This point is key, as we will see later.

Why it matters. The investigation calls into question the true effectiveness of US blockades and regulations. Using intermediaries in other countries, Chinese companies can circumvent the restrictions and access the most powerful chips, all without technically violating them.

Cracks. Under the Trump administration’s controls, the deal is legal as long as INF Tech does not use the chips to help the government with military intelligence applications or to develop weapons.
However, it is difficult to know what the chips are actually being used for, and in fact there are suspicions in the US that the Chinese government is leaning on the private sector to improve its military technology.

Disagreement. If there is a crack, the logical thing would be to seal it. The Biden administration tried to tighten these rules to prevent chips from being sold to countries that are not close allies of the United States, which would have prevented the sale to the Indonesian company. But when Trump returned to power he decided not to go ahead with the new rules: instead of the government policing the flow, it is the companies themselves that are expected to do so.

Interests. The US blockades seek to gain an advantage over China in the AI technology race, all for reasons of “national security.” It is contradictory to leave open these cracks through which the chips end up slipping in legally. The one who thinks it’s great is NVIDIA. Speaking to the Wall Street Journal, a company spokesperson came out in favor of Trump’s decision, saying that “Biden’s controls cost taxpayers tens of billions, paralyzed innovation and ceded ground to foreign rivals.”

Image | NVIDIA, Pexels

In Xataka | The Chinese government has taken a definitive step to break NVIDIA’s dominance in China: prioritize “national” chips

Everyone is developing chips that compete with NVIDIA’s. They are in the wrong race

Qualcomm announced on Monday that it is working on AI accelerator chips, which means new competition for NVIDIA. The company that dominates the AI hardware landscape is watching a large group of competitors try to erode its position, but the problem for all of them is not the chips. It is something else, and it is called CUDA.

What has happened. Qualcomm has announced the AI200 chip, which will go on sale in 2026, and the AI250, which will follow in 2027. Both will work in liquid-cooled rack systems. Qualcomm’s servers may carry up to 72 chips based on the Hexagon NPUs of the company’s Snapdragon SoCs.

Inference yes, training no. The company has revealed that its chips focus on inference (the execution of AI models), not training. Their rack-based systems will have lower operating costs than those of cloud providers, Qualcomm says. Each rack consumes 160 kW, a figure comparable to the consumption of some racks based on NVIDIA GPUs. There are no details about the price of the chips, the cards or the racks that will integrate them, nor about how many NPUs each rack can offer. What we do know, according to CNBC, is that Qualcomm’s accelerator cards will support up to 768 GB of memory, more than NVIDIA or AMD offer in their current models.

Chips for third parties. The other important point is that Qualcomm will sell its AI chips and other components separately, allowing large AI companies to “customize” their own racks around Qualcomm silicon. It is the same philosophy the company has adopted in the world of its mobile SoCs. Investors viewed the news with exceptional optimism, and Qualcomm shares rose 11% in Monday’s session.

NVIDIA dominates with an iron fist. In the AI chip segment, the king is NVIDIA. The company is the absolute protagonist of this market and, according to CNBC, maintains a 90% market share, which has helped its valuation skyrocket to $4.5 trillion.
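Some back-of-the-envelope numbers follow from the figures above. The rack power, chip count and memory are the stated ones; the per-chip power budget is a derived estimate under an assumption Qualcomm has not confirmed (that a full 72-chip rack draws the entire 160 kW):

```python
# Back-of-the-envelope numbers for a Qualcomm AI200/AI250 rack,
# based only on the figures cited above.
rack_power_kw = 160       # stated rack consumption
max_chips_per_rack = 72   # stated maximum chip count
memory_per_card_gb = 768  # stated memory per accelerator card

# Rough per-chip power budget, ASSUMING a full rack draws all 160 kW.
# Qualcomm has not published a per-chip figure; this is only an estimate.
watts_per_chip = rack_power_kw * 1000 / max_chips_per_rack
print(f"~{watts_per_chip:.0f} W per chip slot")
```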
That dominance could now be threatened by the avalanche of chips arriving from various manufacturers.

All against NVIDIA. AMD has its excellent Instinct, Google its TPUs, Amazon its Trainium, Microsoft its Maia and Huawei its Ascend. All of them are striking alternatives to NVIDIA’s chips, and little by little these solutions are being integrated into more and more data centers. But the real problem is not the hardware; it is the software.

The great challenge is to defeat CUDA. The de facto standard that AI developers use is CUDA, a platform that lets them take full advantage of the capabilities of NVIDIA chips in artificial intelligence workloads. This hardware-plus-software combination is much more mature than that of its competitors, who have the hardware part solved (or are on the right track) but lack a platform comparable to CUDA. AMD has ROCm, which is especially interesting because it is open source, but for now its features still do not match CUDA’s.

Reinvent the wheel? CUDA has been on the market for almost two decades, which means that most academic research and pioneering models, such as ImageNet, were written for CUDA. It is not a language: it is a vast collection of libraries, optimized frameworks (like cuDNN), debugging tools and a huge community. Developing a competitor is basically reinventing the wheel, and migrations are expensive, something companies and startups will not find easy to take on.

China is also in the fight. And of course, if there is another great protagonist in this race, it is China. The Asian giant, previously dependent on NVIDIA, is seeking to free itself from the manufacturer, and alongside the development of advanced AI chips it is also trying to build its own AI software stack to replace CUDA.

In Xataka | AI is the best thing happening to nuclear fusion. The construction of ITER is already accelerating

Huawei already has its best strategy to end Nvidia’s dominance in China

In early 2025, NVIDIA had a share of no less than 95% of the Chinese market for artificial intelligence (AI) chips. In recent weeks, however, it has dropped to 50%. This abrupt decrease is largely due to the chip export restrictions the US government has imposed, although it is also driven by the growth of competition within China. Despite this scenario, unfavorable to Nvidia, the company has something very important in its favor: CUDA (Compute Unified Device Architecture). Most AI projects currently in development are implemented on CUDA. This technology brings together the compiler and development tools programmers use to write software for NVIDIA GPUs, and replacing it with another option in projects already underway is a problem. Huawei, which aspires to an important share of this market in China, has CANN (Compute Architecture for Neural Networks) as its alternative to CUDA, but for the moment CUDA dominates the market.

Huawei is going to position CANN as an open source toolkit

This declaration by Li Guojie, a computer scientist at the Chinese Academy of Sciences who is considered an authority in China, clearly expresses how important the tools just mentioned are in the AI model ecosystem: “China must develop an alternative system to achieve self-sufficiency in AI (…) DeepSeek has had an impact on the CUDA ecosystem, but it has not overcome it completely because barriers persist. In the long term we need to establish our own controllable set of software tools that surpasses CUDA.” This is undoubtedly one of the great challenges China faces in this area, and probably its best option is CANN.
Over the last five months Huawei has launched two very competitive AI GPUs, and it is about to take a very important step: CANN will be positioned as an open source toolkit. Its purpose, according to Eric Xu Zhijun, rotating chairman of Huawei, is “to accelerate the innovation of developers and make the chips of the Ascend family easier to use.” Xu Zhijun does not say it expressly, but what his strategy pursues in the background is to increase the competitiveness of the Huawei ecosystem by attacking Nvidia where it is strongest. In addition, Huawei has already begun to discuss with the main players in China’s AI industry, as well as with its commercial partners, universities and research institutions, how it should build its open source ecosystem. If this initiative thrives, and presumably it will, it will represent a very important step forward on the road to China’s technological independence.

Image | HiSilicon

More information | SCMP

In Xataka | Nvidia has to deal with the absolute distrust of several US legislators. Its plan in China is in danger

In Xataka | The US wants to end the chips for China that are sold abroad. And China knows how to defend itself

Intel throws in the towel on AI chips. Its CEO acknowledges that NVIDIA's advantage is insurmountable

"I think it is too late for us (to achieve NVIDIA's position in the field of artificial intelligence), although we have other opportunities in this market (…) Twenty or thirty years ago we were leaders. Now the world has changed. We are not among the ten main semiconductor companies. We have to be humble." These words from Lip-Bu Tan, Intel's CEO, were collected by The Oregonian and were addressed to the company's employees in a clear attempt to lay out the challenges they face. The challenges Intel is now dealing with exceed any it has faced in its more than half a century of history. The leadership it sustained for decades in the integrated circuit manufacturing industry has been in the hands of the Taiwanese company TSMC since the mid-2000s. In addition, the stagnation of the PC market in recent years and the slowness with which Intel has entered the AI industry have placed it in a very compromised position. In July 2024, the company, then led by Pat Gelsinger, took a tremendous hit on the stock market: its shares fell 30% in a few days and settled at the value they had in 2011. Intel also lost $1.6 billion during the second quarter of 2024, and its year-on-year revenue fell by 1%. These circumstances triggered a crisis that still persists. Today China is a crucial support for Intel; tomorrow it will be DRAM memories. Shortly after his arrival, it was leaked that Lip-Bu Tan planned a new round of layoffs in a clear attempt to reduce operating expenses, which include personnel and marketing costs. The figure the company was considering on this occasion amounted to 20% of its workforce, which in practice meant letting go of approximately 20,000 workers. These people join the more than 15,000 employees Intel dismissed during the last months of 2024.
In addition, between June 18 and the end of July, between 8,000 and 10,900 workers at the factories this company operates around the world will be forced to leave their jobs. The most affected plant will presumably be the largest of all: the one in Oregon (USA). It is evident that Intel is going through a very difficult stage, although it still has some solid pillars to hold on to. One of them is China. During fiscal year 2024, 29% of Intel's revenue came from China, compared to 24% from the US, making this Asian country the largest market in which Intel is present. Of the $53.1 billion the company brought in last year, no less than $15.4 billion came from China. These figures reflect very clearly how important the country led by Xi Jinping is for Intel, and also how sensitive the company is to the geopolitical context. An important part of Intel's business rests on the sale of relatively old integrated circuits produced on its mature lithography nodes. They are not cutting-edge semiconductors at all, but they are still necessary. In the current climate of tension between the US and China, these mature integrated circuits are crucial for the latter country. Chinese chip designers and manufacturers are capable of supplying their own market with the mature chips needed by home appliances, telecommunications equipment or cars, among other industries. However, many users, research centers and universities in China continue to use software written for x86 and x86-64 processors, so for the moment they cannot do without the CPUs designed to run it. Intel is currently benefiting from this need, although it is preparing another bet, one that looks sound a priori: this American company has founded, together with the Japanese investment group SoftBank, a company specialized in the design and manufacture of memory chips.
Its name is Saimemory, and it was born expressly to compete head-to-head with SK Hynix, Samsung and Micron Technology. Its plan consists of developing a new type of stacked DRAM memory based on patents from Intel and several Japanese research centers, among them the University of Tokyo. Intel and SoftBank have set out to complete the development of a prototype and evaluate its technical viability by 2027. Image | Intel. More information | The Oregonian. In Xataka | Intel has confirmed that the 20A node will be skipped to reduce expenses. The 18A node will enter production in 2025.

NVIDIA's story is that of a survivor: all its competitors disappeared or were bought

In 1998 NVIDIA was on the verge of going bankrupt. The rivalry among graphics chip manufacturers during the 1990s and the first decade of this century killed many of them. In fact, as Tae Kim explains in his highly recommended book 'The Nvidia Way', only the company led by Jensen Huang survived in a saturated industry that still suffered from obvious immaturity. During the second half of the 1990s, between 80 and 100 companies competed in the PC graphics market, as Kim confirms in his work. Some of them were well known to users, such as Matrox, 3dfx Interactive, S3 Graphics, ATI Technologies, Hercules, Cirrus Logic, Intel, Trident, Number Nine Visual Technology or Rendition, while others were fighting to make their way in a market that was at the time much smaller than today. Tae Kim argues that NVIDIA alone has survived for one reason: it is the only company of all those mentioned so far in this article that remains as it was at the time. Most of them no longer exist, and those that remain have either been bought by other companies, such as ATI Technologies, or have had an unstable presence in the PC graphics hardware market and have made a living from other businesses, such as Intel. Jensen Huang is where he is thanks to his perseverance and intuition. In his book, Tae Kim asserts that NVIDIA has overcome the critical moments it has faced thanks to Jensen Huang. Many of the decisions this executive has made during his career have been guided by 'The Innovator's Dilemma', one of his favorite books. Its author, the American university professor Clayton M. Christensen, maintains that failing to dedicate the necessary resources to innovation leaves the way clear for other companies that can afford to risk everything on innovation in order to consolidate their position in the market. Jensen Huang has always been attentive both to talent coming out of universities and to the talent strengthening his competitors.
Christensen's teachings have inspired Jensen Huang and helped him define NVIDIA's business strategy, but, according to Kim, the company is still competing today thanks to two of Huang's qualities: his perseverance and his intuition. In 1998, TSMC, which even then manufactured NVIDIA's chips, ran into a production problem. NVIDIA was running out of money, but Jensen Huang reacted and convinced three of the PC graphics card manufacturers he worked with: "Our technology is good. We will give you a 10% discount on the IPO when we go public. You just have to give us some money now," Huang promised them. And it worked. His conviction and firm belief in the potential of his products got NVIDIA out of the quagmire, but the recipe for his success has other ingredients we cannot ignore: his intuition and his good eye for recruiting talent. Huang has always been attentive both to talent coming out of universities and to the talent strengthening his competitors. The signing of Dwight Diercks proves it. Jensen Huang followed Scott Sellers closely before the latter co-founded 3dfx Interactive. When that company went bankrupt in 2000 and was bought by NVIDIA, Huang asked Sellers: "Which engineers are really good among all those who have been part of your team? Who are the stars?" Sellers did not hesitate to praise Dwight Diercks, and he ended up at NVIDIA. Jensen Huang is the heart and soul of his company, but it is clear he is fully aware of how essential the people he works with are. Image | NVIDIA. Bibliography | 'The Nvidia Way', by Tae Kim. In Xataka | We can forget about AI without hallucinations for now. NVIDIA's CEO explains why.
