The industry became obsessed with training AI models, while Google prepared its masterstroke: inference chips

In recent years, what really mattered was training AI models to make them better. Now that models have matured and training no longer scales as noticeably, what matters most is inference: that AI chatbots work quickly and efficiently when we use them. Google saw this shift in focus coming, and it has chips prepared precisely for it.

Ironwood. This is the name of the new chips in Google's famous family of Tensor Processing Units (TPUs). The company, which began developing them in 2015 and launched the first ones in 2018, is now reaping especially interesting fruits from all that effort: genuinely promising chips designed not so much for training AI models as for letting us use them faster and more efficiently than ever.

Inference, inference, inference. These "TPUv7" chips will be available in the coming weeks. They can be used to train AI models, but they are aimed above all at "serving" those models to users. It is the other big leg of AI chips, the truly visible one: training the models is one thing, and "executing" them so that they respond to user requests is quite another.

Efficiency and power as the flag. The leap in performance of these AI chips is enormous, at least according to Google. The company claims that Ironwood offers four times the performance of the previous generation in both training and inference, and that it is "the most powerful and energy-efficient custom silicon to date." Google has already reached an agreement with Anthropic giving the latter access to up to one million TPUs to run Claude and serve it to its users.

Google's AI supercomputers. These chips are the key components of the so-called AI Hypercomputer, an integrated supercomputing system that, according to Google, allows customers to reduce IT costs by 28% and obtain a 353% ROI over three years. In other words: Google promises that if you use these chips, your return on investment will multiply more than fourfold in that period.

Almost 10,000 interconnected chips.
The new Ironwoods are also built to join forces at scale. Up to 9,216 of them can be combined in a single node or pod, which in theory makes the bottlenecks of the most demanding models disappear. A cluster of this type is enormous: it offers up to 1.77 PB of shared HBM memory, and the chips communicate with each other at 9.6 Tbps via the so-called Inter-Chip Interconnect (ICI).

More FLOPS than anyone. The company also claims that an "Ironwood pod" (a cluster with those 9,216 Ironwood TPUs) offers 118 times more FP8 ExaFLOPS than its closest competitor. FLOPS measure how many floating-point operations a chip can perform per second, so the promise is that essentially any AI workload will run in record time.

NVIDIA has more and more competition (and that's a good thing). Google's chips are a demonstration of companies' clear determination to avoid depending too heavily on third parties. Google has all the ingredients to pull it off, and its TPUv7 is proof of that. It is not alone: many other AI companies have long sought to create their own chips.

NVIDIA's dominance remains clear, but the company has a small problem. In inference, CUDA is no longer so vital. Once an AI model has been trained, inference plays by different rules than training. CUDA support is still a relevant factor, but it matters much less for inference, which focuses on getting the fastest possible answer. Here models are "compiled" and can run optimally on the target hardware. That may cause NVIDIA to lose ground to alternatives like Google's.

In Xataka | When you're OpenAI and you can't buy enough GPUs, the solution is obvious: make your own
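As an aside, the pod figures quoted above can be sanity-checked with quick arithmetic. This is an illustrative back-of-the-envelope sketch (it assumes decimal units, i.e. 1 PB = 1,000,000 GB; the per-chip number is derived, not an official Google spec):

```python
# Back-of-the-envelope check of the Ironwood pod figures:
# 9,216 chips sharing 1.77 PB of HBM implies roughly 192 GB per chip.
CHIPS_PER_POD = 9_216
POD_HBM_PB = 1.77

# Assuming decimal petabytes (1 PB = 1e6 GB).
hbm_per_chip_gb = POD_HBM_PB * 1e6 / CHIPS_PER_POD
print(f"Implied HBM per chip: ~{hbm_per_chip_gb:.0f} GB")  # ~192 GB
```

The division makes the headline number more tangible: the pod-level 1.77 PB figure is simply a very large per-chip HBM stack multiplied across thousands of interconnected TPUs.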

The secret of Chinese AI companies to compete without Nvidia chips: electricity subsidized by Beijing

Everywhere we look, there is artificial intelligence. Everyone talks about it, but what is its fuel? Not the data, not the chips: the electricity. While Western technology companies look for ways to power their increasingly energy-hungry data centers, China has decided to take a different path. Beijing has designed an energy subsidy for its technology sector with a clear objective: to make the energy that powers the digital brains of its next generation of chips cheaper.

Energy subsidy. Since September, the Chinese government has banned its large national technology companies, including Alibaba, ByteDance and Tencent, from acquiring artificial intelligence chips from America's Nvidia, in an attempt to strengthen local production. The consequence, however, was immediate: domestic processors consume more electricity. According to The Chosun Daily, generating the same number of tokens with Chinese chips requires 30% to 50% more energy than with Nvidia's H20, which sent electricity bills skyrocketing and led companies to complain to regulators.

To make up for that gap, local governments introduced grants that cover up to a full year of operating costs, according to the Hong Kong outlet on.cc. In those provinces, industrial electricity was already 30% cheaper than in the developed coastal areas of the east, but with the new incentives the price could fall to 0.4 yuan per kilowatt-hour, a record low for the Chinese technology industry.

How does the energy plan work? The scheme is relatively simple, but strategic. Local governments offer electricity discounts of up to 50% to data centers that use chips produced within the country. Operators that use foreign processors, such as those from Nvidia or AMD, are excluded from the program. In addition, the energy-producing provinces receive direct support from the state to finance the discounts, with the aim of reducing dependence on technology imports and compensating for the higher consumption of local chips.
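The economics described above can be sketched with a toy calculation. This is illustrative arithmetic only, combining the two figures the article cites (30-50% more energy per token, and a discount of up to half on the electricity price):

```python
# Toy model of the subsidy economics: domestic chips draw 30-50% more
# energy per token, but a ~50% electricity discount can more than offset
# that on the power bill.
def relative_power_cost(extra_energy: float, price_discount: float) -> float:
    """Power cost of domestic chips relative to Nvidia's H20 at full price."""
    return (1 + extra_energy) * (1 - price_discount)

for extra in (0.30, 0.50):
    cost = relative_power_cost(extra, 0.50)
    print(f"+{extra:.0%} energy, -50% price -> {cost:.2f}x the reference bill")
```

With the full discount, even the worst case (+50% energy) lands at 0.75x the reference electricity bill, which is why the subsidy can close the efficiency gap on paper.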
According to the Financial Times, Chinese data centers that rely on domestic semiconductors are, for now, less energy efficient, but the subsidy seeks to bring their costs in line with those of more advanced foreign chips. These regions (Guizhou, Gansu and Inner Mongolia) have become hotbeds for data center clusters thanks to their abundance of hydropower and coal. There, companies like Alibaba and Tencent are building new facilities to house their generative AI models, taking advantage of lower energy costs and tax incentives.

This policy combines three strategic priorities: making energy cheaper, promoting domestic chips and reinforcing technological sovereignty. In a context of US restrictions, every subsidized kilowatt is also a political statement.

An industrial policy with a geopolitical charge. Behind the energy plan lies a long-range political commitment. The Chinese government intends for its technology companies to progressively replace imported chips with domestic processors, even if that implies higher costs in the short term. The electricity subsidy acts as a temporary bridge so that the national giants can adopt local chips without losing competitiveness.

The measure is part of a broader national strategy of technological self-sufficiency. As the Financial Times explains in its series The State of AI, China is using its "society-wide mobilization capacity" to accelerate the development of artificial intelligence. The country already leads in the number of AI patents and scientific publications, and although the United States maintains an advantage in chips and talent, the gap narrows every year. Analyst Dan Wang, quoted by the same outlet, points out: "China has achieved a unique balance between engineering capacity, state control and massive industrial deployment, allowing it to advance faster than other countries in the practical application of AI."

Meanwhile, in the West… China's decision contrasts with the energy challenges of the United States.
Microsoft CEO Satya Nadella has warned that the real bottleneck of AI is no longer the chips, but the energy. In fact, he explained that many companies are accumulating chips they cannot connect for lack of power supply. Both Microsoft and Google are already studying building modular nuclear reactors to power their future data centers, a sign of the enormous energy consumption that artificial intelligence requires.

While Silicon Valley hunts for electricity, China subsidizes it. The asymmetry reflects two different models, one guided by state intervention and the other by market competition. Both pursue the same goal of sustaining the artificial intelligence revolution, but with opposite philosophies.

A future plugged into the state. The Chinese subsidy does not just lower costs: it redefines the relationship between the state and the private sector in the age of AI. As analyst Arnaud Bertrand observed, US restrictions pushed China toward a different model: more efficient, more open and more collective. "By operating under hardware limitations, Chinese companies have learned to optimize resources and share open models like Qwen or DeepSeek," Bertrand wrote on social media. That strategy, based on efficiency and diffusion, could give China a long-term advantage in global adoption, since any company in the world can download and adapt its models.

The country that controls the plug. China is not just making the chips that power its artificial intelligence. It is also building the electrical grid that makes them possible. In a world where data is the new oil, Beijing has decided to subsidize the fuel of the digital brain. While the West debates how to connect its supercomputers, China plugs them in at a reduced price. And in this race, whoever controls the plug could end up controlling the future.

Image | FreePik

In Xataka | The world of AI has a problem: there is no energy for so many chips

The world of AI has a problem: there is no power for so many chips

Microsoft CEO Satya Nadella recently took part in an interview in which he explained that the real problem for the AI segment is not that too many chips are being produced, but that we do not have enough energy to power them all. It confirms something we have seen coming for a long time.

Too many chips for so little power. Both Nadella and Sam Altman, the CEO of OpenAI, participated in the interview. During it, the Microsoft CEO explained that "the biggest problem we have now is not excess computing capacity, but energy. It's something like the ability to build (data centers) close enough to energy sources."

Chips in the drawer. Nadella went on to highlight that "if you can't do something like that (supply enough power), you're going to have a bunch of chips sitting around in inventory that you can't plug in. In fact, that's my problem right now. It's not that I don't have a sufficient supply of chips: it's actually the fact that I don't have places to plug them in."

A problem that was seen coming. Microsoft, like other companies betting on this segment, has been trying to prepare for this energy demand for some time. Two years ago, in autumn 2023, it was already looking for experts to lead its nuclear program. The objective: to bet on the new small modular reactors (SMRs), which could be a good solution for powering future data centers. Google came to exactly the same conclusion a year later, and reached an agreement with Kairos Power to build seven of those reactors between now and 2030.

If you want something done, do it yourself. Large technology companies dedicating billions of dollars to new data centers in the US have discovered that the current electrical grid may be insufficient for their needs.
Their solution is to build their own power plants, something they hope can balance the demand imposed by these gigantic computing factories, in which tens of thousands of AI accelerator GPUs work to serve current (and future) users of AI features.

Growing needs. A report from the International Energy Agency (IEA) estimated that in 2022 data centers (excluding cryptocurrencies) consumed between 240 and 340 TWh of energy, an increase of between 20% and 70% compared to 2015 consumption. In April 2024, the same organization warned that several countries would multiply that consumption significantly.

Triple the energy? ARM CEO Rene Haas pointed out at the time that energy needs would triple, though he could hardly have known how events would unfold. Since then, AI companies have announced mammoth projects, with Stargate at the helm, and will dedicate huge amounts of money to an uncertain bet: that AI will be the great revolution driving our daily lives.

In Xataka | NVIDIA and OpenAI have just made a masterstroke. One that strengthens them and weakens everyone else
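The IEA figures quoted above are internally consistent, which is worth a quick check: both ends of the 2022 range (240-340 TWh) map onto both ends of the growth range (+20% to +70%) from the same ~200 TWh baseline. This is a consistency check on the numbers in the text, not new data:

```python
# IEA estimate: data centers (excl. crypto) used 240-340 TWh in 2022,
# a 20-70% increase over 2015. Both pairs imply the same 2015 baseline.
low_2022, high_2022 = 240, 340        # TWh
low_growth, high_growth = 0.20, 0.70  # growth vs 2015

baseline_low = low_2022 / (1 + low_growth)
baseline_high = high_2022 / (1 + high_growth)
print(f"Implied 2015 baseline: {baseline_low:.0f} and {baseline_high:.0f} TWh")  # 200 and 200
```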

Everyone is developing chips that compete with NVIDIA’s. They are in the wrong race

Qualcomm announced on Monday that it is working on AI accelerator chips, which means new competition for NVIDIA. The company that dominates the AI hardware landscape is watching a large group of competitors try to erode its position, but the problem for all of them is not the chips. It is something else: CUDA.

What has happened. Qualcomm has announced the AI200 chip, which will go on sale in 2026, and the AI250, which will follow in 2027. Both will be able to work in liquid-cooled rack systems. Qualcomm's servers may pack up to 72 chips based on the Hexagon NPUs found in the company's Snapdragon SoCs.

Inference yes, training no. The company has revealed that its chips focus on inference (the execution of AI models), not training. Its rack-based systems will have lower operating costs than those of cloud providers, Qualcomm says. Each rack consumes 160 kW, a figure comparable to some NVIDIA GPU-based racks. There are no details yet about the price of the chips, cards or racks, nor about how many NPUs each rack will offer. What we do know, according to CNBC, is that Qualcomm's accelerator cards will support up to 768 GB of memory, more than NVIDIA or AMD offer in their current models.

Chips for third parties. The other important point is that Qualcomm will sell its AI chips and other components separately, allowing large AI companies to "customize" their own racks around Qualcomm silicon. It is the same philosophy the company has adopted in the world of mobile SoCs. Investors viewed the news with exceptional optimism, and Qualcomm shares rose 11% in Monday's session.

NVIDIA dominates with an iron fist. In the AI chip segment, the king is NVIDIA. The company is the absolute protagonist of this market and, according to CNBC, it holds a 90% share, which has driven its valuation to $4.5 trillion.
That dominance could now be threatened by the avalanche of chips arriving from various manufacturers.

All against NVIDIA. AMD has its excellent Instinct, Google has its TPUs, Amazon its Trainium, Microsoft its Maia, and Huawei its Ascend. All of them are striking alternatives to NVIDIA's chips, and little by little these solutions are being deployed in more and more data centers. But the real problem is not the hardware; it is the software.

The great challenge is to beat CUDA. The de facto standard that AI developers use is CUDA, a platform that lets them squeeze the full capabilities out of NVIDIA chips for artificial intelligence. This hardware-plus-software combination is far more mature than its competitors', who have the hardware side solved (or are on the right track) but lack a platform comparable to CUDA. AMD has ROCm, which is especially interesting because it is open source, but for now its features still do not match CUDA's.

Reinvent the wheel? CUDA has been on the market for almost two decades, which means that the majority of academic research and pioneering models, such as ImageNet, were written for CUDA. It is not just a language: it is a vast collection of libraries, optimized frameworks (like cuDNN), debugging tools and a huge community. Developing a competitor is basically reinventing the wheel, migrations are expensive, and companies and startups will not find it easy to take them on.

China is also in the fight. And of course, if there is another great protagonist in this race, it is China. The Asian giant, previously dependent on NVIDIA, is seeking to break free from the manufacturer, and alongside the development of advanced AI chips it is also trying to build its own AI software to surpass CUDA.

In Xataka | AI is the best thing happening to nuclear fusion. The construction of ITER is already accelerating
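The point about compiled inference being less CUDA-dependent can be sketched with a deliberately simplified toy, not any real framework's API: once a model is exported as a hardware-neutral graph of operations, any backend that implements those operations can serve it. All names here (the graph format, the backends) are hypothetical illustrations:

```python
# Toy sketch of hardware-neutral inference. A "compiled" model is just a
# list of named ops; each backend supplies its own kernel for every op.
# Real compilers (XLA, TensorRT, etc.) are vastly more involved.

GRAPH = [("scale", 2.0), ("add", 1.0)]  # hypothetical exported model: y = 2x + 1

BACKENDS = {
    "generic_cpu": {"scale": lambda x, k: x * k, "add": lambda x, k: x + k},
    "vendor_npu":  {"scale": lambda x, k: k * x, "add": lambda x, k: k + x},
}

def run_inference(graph, backend_name, x):
    """Execute the op graph using whichever kernels the backend provides."""
    kernels = BACKENDS[backend_name]
    for op, arg in graph:
        x = kernels[op](x, arg)
    return x

# Both backends serve the same exported model and agree on the answer:
print(run_inference(GRAPH, "generic_cpu", 3.0))  # 7.0
print(run_inference(GRAPH, "vendor_npu", 3.0))   # 7.0
```

This is why inference is the softer flank: the contract is the op graph, not the vendor's programming platform, so a non-NVIDIA chip only has to implement the kernels, not replicate two decades of CUDA tooling.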

They use Huawei chips and DeepSeek

China's race to become technologically independent from the United States is reaching the military sector. The army is accelerating the integration of artificial intelligence into its operations and, most importantly, it is favoring national technologies: in software, DeepSeek; in hardware, Huawei chips.

What's happening. The Chinese army is using AI to support strategic decision-making and target detection. According to a Reuters analysis, several studies and patents suggest it is also applying AI in new vehicles such as robot dogs and autonomous drones, all while prioritizing national technologies in both software and hardware.

Why it matters. China has already taken steps to stop depending on Nvidia, the maker of the most powerful AI chips. This is one more step toward technological independence, but in a sector as critical as the military. The objective is to eliminate foreign influence from its defense infrastructure, just as the United States does.

Huawei chips. Speaking to Reuters, defense policy expert Sunny Cheung says that since the beginning of this year the Chinese military has increased the number of contractors that exclusively use national hardware, that is, AI chips made by Huawei. Although the military still uses Nvidia chips (it is not known whether they were imported before or after the blockade), there is a clear movement toward homegrown silicon.

DeepSeek. At the beginning of the year, military experts in China said the army was testing DeepSeek integration. In May, researchers from Xi'an University showed a DeepSeek-based system capable of creating and analyzing 10,000 combat scenarios in just 48 seconds. Reuters analyzed several tenders awarded by the Chinese military, and at least a dozen mentioned DeepSeek while only one referenced Alibaba's Qwen. It is clear which model the Chinese army prefers.

Robot dogs and drones.
The documents analyzed by Reuters also suggest that the Chinese military is integrating AI into autonomous vehicles such as robot dogs. It is no secret: in 2024 the army itself published a video promoting robot dogs that moved in packs to eliminate explosives and other threats. The robots in the video were from the Chinese company Unitree, but other national companies also manufacture these vehicles, such as Norinco, which confirmed in a technical report that it uses Huawei chips. DeepSeek, meanwhile, is also being integrated into drones to give them the ability to recognize and follow targets with hardly any human intervention.

Image | Wikipedia, Flickr

In Xataka | Europe already has the future of war drones within its reach. And it is offered by a country accustomed to them: Israel

In its race to make advanced chips, China has tried to copy ASML. It’s going wrong

China continues to make extraordinary progress in manufacturing its own advanced chips, but it still has a big problem: it does not yet have its own extreme ultraviolet (EUV) lithography equipment. It is, of course, working on developing this technology, and one of the strategies it is following to overcome the challenge is unique… and almost obvious.

Reverse engineering. In his 2010 book 'Copycats', Professor Oded Shenkar argued that imitators often end up triumphing over innovators. Although the West takes the opposite view, in China copying is seen positively, and reverse engineering is an important tool for absorbing technologies. That, reportedly, is what the country has tried, as indicated by The National Interest (TNI).

From producing for the world to producing for themselves. We have already reviewed the conclusions of the book 'Apple in China', a perfect example of how, by delegating production to China, Western companies ended up contributing to the country's development and specialization. The trade war has, logically, pushed China to seek independence in the face of the vetoes that block it from acquiring advanced technological solutions.

From DUV to EUV. There has already been significant progress in this area: we recently reported that a Chinese manufacturer has a prototype deep ultraviolet (DUV) machine for producing relatively advanced chips. The crucial challenge for even more advanced chips is to have EUV lithography machines, but solving the DUV problem first is an important step toward making the leap to EUV technology. And this is where something unique has come to light.

Let's see how it works inside. As revealed in TNI, China has been "caught" trying to reverse engineer an ASML DUV lithography machine.
Not so much to mass-produce these machines, sources indicate, as to let Chinese technicians learn how they work in order to replicate them and, from there, develop more advanced machines and chips.

It's not broken just because. It seems, however, that while disassembling one of these ASML systems, the Chinese technicians damaged it. That forced them to call in official ASML technicians to fix the problem. When they arrived, they discovered that the machine had not simply broken: it had been taken apart and then put back together.

ASML's de facto monopoly. ASML's EUV lithography machines are considered the most complex and advanced in the world, and the truth is that today the Dutch company holds a de facto monopoly on such systems. These machines unlock the production of the most advanced chips, such as those used in NVIDIA's modern AI accelerators, and have become the true bottleneck of the semiconductor industry.

Beyond the damaged machine. The incident reveals two crucial points. The first is Beijing's extreme urgency to control chip production from start to finish. The second is that the challenge of building these machines goes beyond merely copying hardware: lithography systems require extraordinary technical mastery of areas such as precision optics and materials science.

Too many obstacles? China may have brilliant engineers, but ASML machines also depend on a highly specialized supply chain, which undoubtedly makes it difficult to build such a machine entirely in China. A good example is Zeiss SMT, the German company that supplies the ultra-precision optical systems and mirrors needed for EUV and advanced DUV lithography.

A long way to go. This reported setback reveals the difficulties China faces in acquiring advanced lithography technology.
Nikkei Asia was already discussing in July how complex it is to achieve a "Chinese ASML." That analysis cited Didier Scemama, director of hardware research at BofA Global Research, who estimated that China is still years away from something like this: "It may take 5, 10, 15 years, we don't know. Will it be competitive with what ASML does? It's highly unlikely, but it will be good enough for China."

Image | Zeiss

In Xataka | The Netherlands has just declared war on China in the most important battle of the century: control of semiconductors

There is a mystery customer spending $10 billion on Broadcom chips. Nobody knows who it is, and that should worry us

Charlie Kawwas, president of Broadcom's semiconductor division, confirmed yesterday that OpenAI is not the mysterious client that agreed to pay $10 billion for custom chips. The existence of that enigmatic client became known in September, and there was near unanimity in assuming it was OpenAI. But it turns out it is not. "I would love to receive a purchase order for 10 billion from my good friend Greg," Kawwas said, referring to Greg Brockman, president of OpenAI. "He hasn't given it to me yet."

Why it matters. During the Cold War, nuclear installations could be counted from satellites. In the AI race, someone may be building the computational equivalent of a nuclear arsenal and we have no way of knowing. AI chips are the new strategic weapons, and unlike enriched uranium, they travel discreetly in commercial containers. An entity with $10 billion to spend on custom semiconductors is building AI capability on a massive scale.

The candidates. Analysis rules out the usual suspects: Meta and Google are already known Broadcom customers; Amazon has its own chip strategy with AWS; Microsoft invests through its partner-friend-rival OpenAI. That leaves more unsettling options: Gulf sovereign wealth funds with technological ambitions; US government entities (the NSA, classified projects); Chinese actors operating through intermediaries; or Apple preparing a major play in AI. This last option would be the canary in the coal mine signaling Apple's full immersion in AI, but Mark Gurman has not anticipated anything of the sort, so it sounds very remote.

The money trail. Broadcom does not announce the arrival of this type of customer by chance. In September, CEO Hock Tan mentioned the $10 billion order because it completely changed the company's revenue projections for 2025. Broadcom shares are up more than 53% so far this year, after already doubling in value in 2024.
The market always rewards these secret contracts, even when it does not know who signs the check.

In perspective. Opacity in AI infrastructure investments has become the norm. Companies treat their component strategies as classified information. OpenAI has just announced 33 gigawatts of computing capacity across agreements with NVIDIA, AMD and Broadcom. One gigawatt can cost $50 billion. The figures are stratospheric, but at least we know who is signing those deals.

The alarm signal. When $10 billion in critical technology changes hands without identification, we have a problem, because computational training capacity, in the age of AI, is geopolitical power. This case also sends a message about the immediate future: the next technological revolution may be developing outside of any public scrutiny.

Featured image | Xataka

In Xataka | Broadcom is the other NVIDIA: it has entered the select trillion-dollar club and does not stop growing thanks to AI
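Multiplying the two figures cited above gives a sense of just how stratospheric "33 gigawatts" is. This is rough illustrative arithmetic only, taking the article's $50 billion-per-gigawatt figure at face value:

```python
# Rough scale check: 33 GW of announced capacity at ~$50B per gigawatt.
gigawatts = 33
cost_per_gw_usd = 50e9

total_usd = gigawatts * cost_per_gw_usd
print(f"Implied total: ~${total_usd / 1e12:.2f} trillion")  # ~$1.65 trillion
```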

OpenAI signs with Samsung and SK Hynix for a potential chip demand of 900,000 wafers per month. It is an absurd figure

A package of agreements was closed in Seoul that reflects just how far the race for artificial intelligence is going. OpenAI sat down with Samsung and SK to advance its Stargate project, and the companies set a goal that is startling on its own: 900,000 DRAM wafers per month. The plan, according to the parties, involves reinforcing memory production and studying new data centers in South Korea. All of this was announced after a series of meetings between Sam Altman, business leaders and President Lee Jae-myung himself.

The meeting at the Seoul presidential office brought Sam Altman together with the leaders of the aforementioned Korean conglomerates, in the presence of President Lee Jae-myung. The tone was shared: Korea seeks to consolidate itself as one of the three global powers in artificial intelligence, and OpenAI needs to anchor its Stargate project in regions with technological muscle. That fit explains both parties' interest in formalizing agreements covering everything from memory supply to the construction of new data centers, with a long-term view.

An objective that could strain the entire memory sector. The volume put on the table is disproportionate compared to the market. According to TechInsights, global production capacity for 300-millimeter DRAM wafers was about 2.07 million per month in 2024 and is set to grow to 2.25 million in 2025. Reaching 900,000 would mean around 40% of all that capacity. No individual manufacturer comes close to such a figure alone, so the magnitude of the agreement reflects both OpenAI's ambition and the growing pressure to secure the supply of advanced memory.

The signed documents include preliminary commitments to expand memory production and evaluate additional infrastructure in South Korea. Among them is the participation of Samsung SDS in the development of data centers, as well as Samsung C&T and Samsung Heavy Industries in their design and construction.
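The share implied by those figures is easy to check. Illustrative arithmetic only, using the TechInsights capacity numbers quoted above:

```python
# DRAM share implied by the Seoul agreements:
# 900,000 wafers/month against TechInsights' global 300mm DRAM capacity
# (~2.07M/month in 2024, ~2.25M projected for 2025).
demand = 900_000
capacity_2024 = 2_070_000
capacity_2025 = 2_250_000

share_2024 = demand / capacity_2024
share_2025 = demand / capacity_2025
print(f"{share_2024:.0%} of 2024 capacity")  # ~43%
print(f"{share_2025:.0%} of 2025 capacity")  # 40%
```

Either way you cut it, a single customer would be claiming roughly two of every five DRAM wafers made worldwide, which is why the figure reads as absurd.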
The Ministry of Science and ICT is considering evaluating sites outside the Seoul Metropolitan Area, and SK Telecom has signed an agreement to study the viability of a center in the southwest of the country. The parties also propose exploring the deployment of ChatGPT Enterprise and API capabilities in corporate operations.

A key point in all this lies in the difference between using and training a model. When someone queries a chatbot, the inference infrastructure that kicks in is far less demanding. But training a new-generation system requires thousands of chips working in parallel, each paired with high-performance memory modules. That scale multiplies the need for servers, cooling systems and electrical power. In that context, guaranteeing hundreds of thousands of wafers per month does not look like excess, but like a way of ensuring that the next wave of models has the material support it needs.

Stargate data centers in the United States. OpenAI's computing muscle relies on huge alliances. With Oracle and SoftBank, the company is preparing five data centers that would provide several gigawatts of capacity. Nvidia, meanwhile, has announced it would invest up to $100 billion and give OpenAI access to more than 10 gigawatts through its training systems.

OpenAI's trajectory cannot be understood without Microsoft, its first great partner. The initial bet of $1 billion in 2019 and the subsequent $10 billion investment gave OpenAI access to the Azure cloud, key to training the models that powered ChatGPT. Over time, however, Sam Altman's company has begun to reduce that dependence. Its latest moves mark a change of course toward infrastructure over which OpenAI has more direct control, a way of making sure it is not tied to a single supplier. It should be remembered that many of the announcements remain preliminary: letters of intent and memoranda signal the will to move forward, but concrete details have not yet been finalized.
At the scale Stargate proposes, the risks are evident: from bottlenecks in the production of high-performance memory to the energy needed to feed facilities of several gigawatts. Add to that the necessary permits and the complexity of coordinating projects among so many actors. For now, what has been signed opens a path, but it remains to be seen what materializes, and on what timeline.

Images | Sam Altman | Samsung | SK Hynix | Xataka with Grok

In Xataka | I've been hooked on Sora 2 for two days: I'm generating absurd memes where I am the protagonist and I can't stop

CUDA is the standard that grips the world and Nvidia is the only company with chips capable of running it. Until now

Meta will acquire Rivos, a Californian startup specialized in the design of RISC-V-based chips, according to Bloomberg sources. Beyond the capabilities of its chips, the operation is part of a broader strategy: freeing Meta from its dependence on NVIDIA and taking control of its artificial intelligence infrastructure.

What is at stake. Over recent years, NVIDIA has dominated the GPU market for AI thanks to CUDA, its proprietary development platform, which has become the de facto standard for training and running artificial intelligence models. We have reached the point where anyone who wants to operate at large scale needs NVIDIA chips, and that gives the company enormous market power, since it supplies the essential hardware for an industry everyone wants to enter. Meta, despite having some of the best open models in the sector with Llama, keeps spending billions annually on NVIDIA hardware.

The strategic move. With Rivos, Meta is not just buying a company; it is buying an alternative to its current technology stack. The startup develops GPUs and accelerators based on RISC-V, an open-source instruction set architecture that challenges the traditional x86 (Intel and AMD) and ARM. Meta already works on its own in-house chip, the Meta Training and Inference Accelerator (MTIA), designed alongside Broadcom and manufactured by TSMC, but progress has not been as fast as Zuckerberg would like. According to sources cited by Bloomberg, the CEO had been actively looking for reinforcements on the market to accelerate development.

It is not the only one. Meta joins a race in which its technology rivals already have a head start. Google has its TPUs, Amazon has Trainium, and Microsoft has developed Maia. The AI war is not won only with the best models, but also with the chips that run them, and Meta, despite burning billions of dollars on AI, was falling behind on this front.

The context. The Rivos acquisition is not an isolated move.
Meta had already tried to buy FuriosaAI, a South Korean startup specialized in chips for training AI systems, but its offer of 800 million dollars was rejected. In addition, the company recently announced an investment of 29 billion dollars to build a huge data center in Louisiana, and it plans to spend up to 72 billion this year on AI-related infrastructure. The RISC-V challenge. Rivos represents an ambitious bet. Although RISC-V has not yet managed to penetrate US data centers at scale (its presence is mainly limited to microcontrollers and IoT devices), its potential is undeniable: China is already launching tablets and laptops with this architecture. If Meta manages to develop a RISC-V-based AI accelerator capable of replacing the Nvidia H200 in its internal operations, it would be a considerable blow to the dominant standard. Cover image | Nvidia and Meta. In Xataka | OpenAI has just presented Sora 2 with a TikTok-style app. This outlines a new wave of viral videos.

Intel has spent decades manufacturing chips only for itself. Its only salvation is to make chips for everyone else

Let’s take a trip to the past. The year is 1997 and Steve Jobs has just returned to Apple, but the state of the company is terrible and its future uncertain. To try to save it, Apple began looking for strategic alliances, and that was when it announced an absolutely unusual one with Microsoft. Bill Gates’s company would invest 150 million dollars in Apple, and the two would collaborate on several fronts. That unique agreement had seemed impossible: both companies were great rivals, but the truth is that both won with that alliance. Now it seems we could be living through an analogous situation with two other companies that are also great rivals. On one side is Intel, which is at a low point much as Apple was in 1997. On the other is TSMC, which dominates the semiconductor market the way Microsoft then dominated software. According to The Wall Street Journal, both companies are negotiating a possible alliance that is certainly surprising but has very interesting ramifications. If TSMC helps in Intel’s “salvation”, that will give it an advantageous position in future agreements with the US government. That government now owns 10% of Intel’s shares, and for better or worse, getting along with Intel means getting along with the administration. Given the current policy, which practically forces companies to manufacture chips and components in the US to avoid tariffs, that potential alliance becomes profitable. And not just that: the agreement also favors TSMC’s interests by warding off possible antitrust action. How can you be a monopoly when you are helping a competitor stay afloat? As Apple and Microsoft demonstrated, eliminating the competition is not the only way to win the game. A promising transition. The historic crisis Intel has been going through has forced its new CEO, Lip-Bu Tan, to make very difficult decisions. The mass layoffs are part of that strategy, but the company has also undergone a deep restructuring that is splitting it into pieces. But there is even more.
In fact, Intel’s strategy seems to amount to recognizing and accepting the failure of the era of the “exclusive chip”. The firm has admitted that manufacturing by and for itself alone had no future, and it now wants to focus on a business model in which it is a chip factory for third parties. That is exactly what has placed TSMC where it is. If the alliance with TSMC is completed, it would confirm a singular strategy at Intel, which in a few weeks has shown a unique openness to alliances of every kind and condition: SoftBank injected 2 billion dollars; the US government bought 10% of Intel for 8.9 billion dollars; Nvidia invested 5 billion dollars; Apple is a candidate for a collaboration agreement; and now TSMC could also follow those steps. All these moves certainly open an escape route for an Intel that seemed to be on the ropes. If those alliances bear fruit, Intel will be left with only its two great future objectives. The first: deliver on its promises with the 18A node, on which everything is riding. The second: get customers for that node. And that is where these agreements can be very useful. Image | Intel. In Xataka | Intel has confirmed that the 20A node will be skipped to reduce expenses. The 18A node will enter production in 2025.
