A Singapore company has purchased 136,000 AI GPUs from NVIDIA. What is not clear is what it has done with them.

In the last three years, a little-known Singapore company has become the largest buyer of NVIDIA chips in Southeast Asia. That alone has set off alarms, especially now that the trade war between the US and China means the illegal trafficking of these components is under intense scrutiny.

The suspicion. The company, called Megaspeed, is being investigated by the US government. The objective is to find out whether there are ties between this company and the Chinese government, and whether the NVIDIA chips it has purchased have ended up in China despite the prohibition on those cards ending up there. The Singapore government is also checking whether Megaspeed has violated local laws, according to Bloomberg.

Megaspeed denies the accusations. In a statement emailed to that outlet, Megaspeed representatives declare that the company “is based in Singapore and operates fully in accordance with applicable laws, including United States export control regulations.”

For now there is no evidence. An NVIDIA spokesperson indicates that its request for information from Megaspeed shows no evidence of any violation of the terms of those transactions. In their visits to Megaspeed’s data centers they confirmed that “the GPUs are where they are supposed to be.” Furthermore, according to NVIDIA’s data, Megaspeed is owned and operated entirely outside of China, and there is no Chinese shareholder.

But it does serve Chinese tech giants. Megaspeed runs a “neocloud”: cloud infrastructure dedicated to offering computing capacity for AI projects. It has several data centers in Southeast Asia, and the company rents NVIDIA chips to Alibaba. This is an option the US government still allows: Chinese companies cannot buy the chips, but they can access them through suppliers in non-vetoed countries.

Delicate situation.
The question is whether Megaspeed has really done things right or whether it has ended up serving as an intermediary for NVIDIA chips to reach Chinese technology companies. It would also be troubling if Megaspeed turned out to have ties to Chinese companies or the Chinese government. The investigation comes just as President Donald Trump has stated that he would approve the sale of certain NVIDIA chips to China, something that until now was prohibited.

Confusing data. Although Bloomberg admits it has found no evidence that Megaspeed’s NVIDIA chips were sent to China, doubts remain. Its reporters analyzed documents recording commercial transactions, appointments and job offers from Megaspeed and some of its partner companies, and detected “inconsistencies” between the chip inventory and the chips that should actually be installed in its data centers.

Megaspeed has thousands of NVIDIA GPUs. The problem is that this company holds a huge number of chips. Between its founding in 2023 and November 2025, Megaspeed imported at least 136,000 NVIDIA GPUs, according to Malaysian and Indonesian customs records. More than half are Blackwell chips, which Trump said he would not approve for export to China. Most of those newer GPUs were purchased about six months ago, and the NVIDIA employees who visited the data centers could not definitively confirm that the exported units ended up where they were supposed to be.

The suspicion: a mysterious data center in China. Megaspeed’s website says it has three data centers in Malaysia and Indonesia. There is also mention of a facility under construction in an unspecified location. The problem is that Megaspeed showed a render of a data center in Shanghai financed in part by Megaspeed’s original parent company, a Chinese firm.
Not only that: Megaspeed has a kind of corporate twin in China with an identical website, which suggests that the Singapore company’s employees also work for the Chinese one. All of this raises clear questions that remain unresolved and that feed further suspicion.

In Xataka | The US believed it had dealt a mortal blow to China when it deprived it of NVIDIA. It only accelerated one plan: ‘Delete America’

Huawei is not the only one seeking to challenge Nvidia. There are four other “little dragons” knocking on the door

“AI” may be one of the words of the year, but “funding round” wouldn’t be far behind in the competition. OpenAI is the unicorn that, after preparing in 2024 to exceed a valuation of 100 billion dollars, is today bigger than Coca-Cola or Samsung. It has achieved this thanks to money injected by third parties, and Chinese companies want to follow the same strategy as the American ones with a single goal in mind: erase the United States from the equation. It’s the ‘Delete A’ plan.

Biren. Talking about Chinese artificial intelligence means talking about DeepSeek and a few other models, but above all about hardware companies like Huawei. Its GPUs are helping the Chinese AI field flourish, and among those GPU companies is Shanghai Biren Technology. As SCMP reports, it has begun a financing round seeking to raise more than 620 million dollars. Founded by Nvidia and Alibaba veterans, Biren has to its credit the BR100, one of China’s great hopes for the raw performance needed to power the demanding data centers that train artificial intelligence. And, unlike others that have opted for mainland Chinese markets, Biren has chosen Hong Kong to attract international capital more easily. They are not the only ones in this race.

Moore Threads. If Biren has Nvidia veterans on its team, Moore Threads is directly led by Zhang Jianzhong, who headed Nvidia in China. It is perhaps China’s closest answer to Nvidia itself, because it seeks to replicate Jensen Huang’s business model by combining 3D graphics, for a growing Chinese ecosystem of gamers, with GPUs for AI. To its credit it has the recent Huagang architecture, a series that promises 50% more computing density than the company’s previous generation of chips while being ten times more energy efficient. That efficiency is key to keeping AI operating costs at bay, something of vital importance for a China focused on artificial intelligence that is cheaper yet functional as soon as possible.
And calling it Nvidia’s great Chinese rival is no empty boast. On the one hand there are the Huashan chips, focused on massive clusters of up to 10,000 cards to train LLMs. On the other, the Lushan chips, which feature hardware ray tracing for the video game market, and the new Moore Threads GPUs support the major gaming APIs. When Moore Threads debuted on the Shanghai stock market earlier this month, its shares skyrocketed 500% on the first day, demonstrating that the Chinese market wants to have “its Nvidia”.

Little dragons. Biren and Moore Threads are two legs of the table. The other two are MetaX (formed by former AMD staff and focused on computing power) and Enflame (a company backed by Tencent that develops cloud AI systems for Tencent itself). They are known as the “four little dragons of AI” (although other startups sometimes get the same label): four of the most promising GPU startups in China that, together with a Huawei that has taken giant steps with its Ascend 910D, have only one objective.

“Delete A”. Delete the United States. In 2022, when the US veto of Huawei was still recent amid the escalation of the trade war between the US and China, China’s State-owned Assets Supervision and Administration Commission launched Document 79. It was an initiative to encourage the creation of technology that would turn its hardware companies into heavyweights of the global industry. However, there was something else. According to the Wall Street Journal, the document had an unusual level of secrecy and an underlying idea: delete the United States. Hence ‘Delete A’, or ‘Delete America’. How? By making all state-owned companies operating in strategic sectors (such as finance, telecommunications, defense or energy) replace foreign software and hardware with domestic alternatives. When? Before 2027.
To do this, domestic options must exist, hence the boost to Huawei and to startups like these “little dragons”. It has also given headaches to companies that have been unable to access Nvidia chips such as the H20, because they must opt for native solutions that are less powerful or less optimized in some respects.

Chinese sovereignty. This development is not just a whim of China’s, but a necessity. Huawei, Enflame, Moore Threads and Biren, among many others, are on the Entity List of the US Department of Commerce, which prohibits them from trading with Western companies and accessing that foreign technology, although more recently the United States has loosened the rope, allowing Nvidia to sell its H200 chips to China... under certain conditions. It is a clear move along the lines of “if China is going to have the technology anyway, let’s take advantage while we can.” And that is because Huawei is working on an open alternative to Nvidia’s CUDA technology, the real ace up the company’s sleeve. Because it is no longer about technical muscle, but about the “language” the AI speaks. When China manages to develop that “interpreter”, that is when it will have taken the real leap forward in developing its tools and in the search for that sovereignty.

Images | Biren, Moore Threads

In Xataka | Big tech is starting to pawn grandma’s jewels for AI: it’s a worrying symptom

Google’s secret weapon against CUDA’s dominance is called TorchTPU. And it’s a missile aimed at NVIDIA’s waterline

Google has launched an internal initiative called “TorchTPU” with a singular goal: to make its TPUs fully compatible with PyTorch. For the uninitiated, the translation is this: what Google intends is to break, once and for all, the monopoly and absolute control that NVIDIA holds through CUDA.

Why it matters. NVIDIA has become the most valuable company in the world by market capitalization for two big reasons. The first is its AI GPUs. The second, much more important, is CUDA, the software platform used by virtually all AI developers, which has a crucial peculiarity: it only works on NVIDIA’s own chips. So if you want to work in AI with the latest and greatest, you have to jump through NVIDIA’s hoops... until now.

What happens with Google and its TPUs. Google’s Tensor Processing Units (TPUs) were until now optimized for JAX, Google’s own platform, similar to CUDA in its objective. However, the majority of the industry uses PyTorch, which has been optimized for years on top of the aforementioned CUDA. That creates a barrier to entry for other chipmakers, which face a huge bottleneck in attracting customers.

Meta is in on it. Anonymous sources close to the project tell Reuters that, to achieve its goal and accelerate the process, Google has partnered with Meta. This is especially striking because it was Meta that originally created PyTorch. Mark Zuckerberg’s company has ended up just as beholden to NVIDIA as its rivals, and is very interested in Google’s TPUs offering a viable alternative to reduce its own infrastructure costs.

Google as a potential AI chip giant. The company led by Sundar Pichai has made an important change of direction with its TPUs, which were previously reserved exclusively for its own use. Since 2022, the Google Cloud division has taken control of their sale and turned them into a fundamental revenue driver, because they are no longer used only by Google: just ask Anthropic.
A spokesperson for that division did not comment specifically on the project, but confirmed to Reuters that this type of initiative would give customers the ability to choose.

All against NVIDIA. This alliance is the latest attempt to neutralize that great ace up NVIDIA’s sleeve. In recent months we have seen how companies like Huawei are preparing their own alternative ecosystem to CUDA, and how several Chinese AI companies are participating in a joint effort with the same purpose.

Hardware matters, software matters more. CUDA has become such a critical component for NVIDIA that if other semiconductor manufacturers have not been able to compete, it is not because of their chips, but because they cannot support CUDA natively. AMD is a great example: it has exceptional AI GPUs, in fact superior to NVIDIA’s in certain respects, but its software is not as powerful.

In Xataka | Google’s TPUs are the first big sign that NVIDIA’s empire is faltering

Huawei is building its own alternative ecosystem to CUDA. If it succeeds, NVIDIA will have a serious problem

When talking about NVIDIA, almost all the focus is on the hardware: the H100, Blackwell, racks, energy consumption, nanometers... It is understandable, but it is a mistake. NVIDIA’s defensive moat is not the hardware. It is CUDA. CUDA is not an add-on to the chip; it is the de facto standard on which most of the AI code on the planet is written, optimized and debugged. Changing GPUs without leaving CUDA behind is not an option, and leaving CUDA means rewriting years of work. That is why it is a moat.

Why it matters. Huawei’s big bet is not to “make a Chinese H100.” It is to build a path for developers to reach Ascend without feeling like they are changing planets.

The restrictions are accelerating it. Export controls have split the world in two: an ecosystem that revolves around NVIDIA, and another that China is trying to build against the clock. In that second one, Huawei is not just playing chips: it is playing “ecosystem”, in AI and beyond. And therein lies the nuance: you can be years behind in chips and still reduce dependency if you get the software side to go along.

In detail. Huawei is attacking the problem on three fronts, with a pragmatic logic: not replacing everything at once, but opening shortcuts.

Native stack (CANN + MindSpore). This is its “pure” alternative: its own environment and its own tools to get the most out of Ascend. The cost today is high: there are complaints of instability, the documentation is rather messy, and the community is much smaller.

PyTorch support. This is the most strategic move. Huawei does not try to make the world love its framework; it tries to ensure that the world doesn’t have to leave PyTorch. torch_npu acts as an adapter to run PyTorch models on Ascend, but with one problem: it is not native and suffers with every PyTorch change. If PyTorch advances and your backend lags behind, the developer notices.

Portability via ONNX. Here Huawei looks for its best window: inference and deployment, not training.
ONNX works as a bridge format: you train where you can (often on NVIDIA) and deploy on Ascend. It’s a less romantic and more useful approach: if shortages hit, moving inference to local hardware is an immediate relief.

Between the lines. The real story is that Huawei is trying to replicate the “trick” that made NVIDIA great: turning its hardware into an experience. Hence the tactic that explains everything: putting engineers in the client’s offices to migrate code and optimize it. It is not scalable as a business model, but it is scalable as a transition model: you buy time while you mature tools, libraries and support. And there is another consequence: if China gets enough teams to adopt Ascend out of necessity, over time that can become habit and then infrastructure. Not because it is better, but because it is already integrated.

Yes, but. Huawei has two limits that cannot be fixed with marketing. Hardware improvement rate: roadmap analyses suggest relative stagnation and a gap that could widen, not close, if NVIDIA keeps accelerating its cycles. Off-chip bottlenecks: memory (HBM), tooling and industrial capacity. You can add “worse” chips, but you need to make a lot of them and build a lot of systems.

And now what. If this movie continues, we will see two clear signs. Less chip hype and more real migration stories: how many teams have moved to Ascend, with what frictions, with what performance losses. And less obsession with training on Ascend, with more normalization of the hybrid pattern: train where you can, deploy where you must. NVIDIA will continue to be CUDA. Huawei is not “a chip.” It is an escape strategy. And the restrictions are the fuel that is making it inevitable.

In Xataka | With HarmonyOS NEXT Huawei has achieved something incredible. Neither Samsung, Microsoft nor Mozilla achieved it

Featured image | NVIDIA, Huawei
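The adapter idea behind torch_npu described in this piece (keep the framework-level model code untouched, and let a pluggable backend supply the actual kernels) can be sketched in a few lines of pure Python. Everything here is invented for illustration; the real torch_npu and PyTorch dispatch machinery is far more elaborate.

```python
# Conceptual sketch of a framework/backend adapter, in the spirit of torch_npu.
# All names are hypothetical; a real adapter would call into vendor libraries
# (e.g. CANN on Ascend) instead of the stand-in kernel used here.

from typing import Callable, Dict, List

Matrix = List[List[float]]

# Each backend registers its own implementations of the framework's primitives.
BACKENDS: Dict[str, Dict[str, Callable]] = {}

def register_backend(name: str, kernels: Dict[str, Callable]) -> None:
    BACKENDS[name] = kernels

def matmul(device: str, a: Matrix, b: Matrix) -> Matrix:
    # Framework-level op: dispatches to whichever backend was selected.
    return BACKENDS[device]["matmul"](a, b)

# Reference "cpu" backend: plain Python loops.
def _cpu_matmul(a: Matrix, b: Matrix) -> Matrix:
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

register_backend("cpu", {"matmul": _cpu_matmul})

# Hypothetical accelerator backend: same contract, different kernel underneath.
register_backend("npu", {"matmul": _cpu_matmul})  # stand-in for a vendor kernel

def tiny_model(device: str, x: Matrix, w: Matrix) -> Matrix:
    # The "model" code is device-agnostic: only the device string changes.
    return matmul(device, x, w)

x = [[1.0, 2.0]]
w = [[3.0], [4.0]]
print(tiny_model("cpu", x, w))  # [[11.0]]
print(tiny_model("npu", x, w))  # [[11.0]], via the other backend
```

This is why "not native" hurts: the adapter must keep every registered primitive in lockstep with the framework's evolving op set, or user code breaks.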

DeepSeek would have trained its next model with smuggled NVIDIA chips, according to The Information

The Chinese artificial intelligence startup DeepSeek has reportedly been training its next model with thousands of NVIDIA Blackwell chips, the most advanced on the market and whose export to China is expressly prohibited by the United States. So states The Information, citing six sources close to the company, who claim the chips reached the country through smuggling.

The alleged smuggling scheme. According to the outlet, the chips were acquired legally through data centers in countries where their sale is allowed. Once installed and inspected by NVIDIA or its authorized distributors, such as Dell or Super Micro Computer, the servers were allegedly disassembled and the components shipped to China in separate pieces, passing customs under false declarations. This method would leave no trace of the end user.

NVIDIA’s response. The company has flatly denied these accusations in a statement: “We have not seen any evidence or received notices of ‘ghost data centers’ built to deceive us and our OEM partners, which are then dismantled, smuggled and rebuilt elsewhere.” NVIDIA adds that, although this type of smuggling “seems implausible,” it investigates any information it receives about it.

Why Blackwell chips are so valuable to DeepSeek. NVIDIA’s Blackwell processors began shipping in the final quarter of 2024, with companies like Google, Microsoft and OpenAI the first to receive them. These chips include specialized hardware to accelerate sparse computing, executing this type of calculation up to twice as fast as traditional methods. According to The Information, DeepSeek has been using a technique called “sparse attention” that activates only certain parts of the model to respond to requests instead of the entire model, which significantly reduces inference costs.
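The general idea behind top-k sparse attention can be sketched generically: score every key as usual, but keep only the k highest-scoring positions and renormalize, so most value rows never enter the weighted sum. This is a toy illustration of the concept, not DeepSeek’s actual implementation.

```python
# Minimal sketch of top-k "sparse attention" (generic illustration only).
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sparse_attention(query, keys, values, k=2):
    # Dot-product score between the query and every key.
    scores = [sum(q * kk for q, kk in zip(query, key)) for key in keys]
    # Keep only the indices of the k largest scores: the "sparse" step.
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    # Renormalize over the surviving scores only.
    weights = softmax([scores[i] for i in top])
    # Weighted sum over the selected value rows; the rest are never touched.
    dim = len(values[0])
    out = [0.0] * dim
    for w, i in zip(weights, top):
        for d in range(dim):
            out[d] += w * values[i][d]
    return out, sorted(top)

query = [1.0, 0.0]
keys = [[2.0, 0.0], [0.0, 2.0], [1.0, 0.0], [-1.0, 0.0]]
values = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [9.0, 9.0]]

out, used = sparse_attention(query, keys, values, k=2)
print(used)  # [0, 2]: only 2 of the 4 positions were attended to
```

The compute saving comes from skipping the value aggregation (and, in real models, much of the key/value traffic) for the positions that are pruned, which is exactly the kind of workload Blackwell’s sparse-computing hardware accelerates.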
Blackwell chips would be especially useful for this approach, although their application in larger models is proving more complicated than anticipated.

Geopolitical context. US President Donald Trump boasted to Chinese leader Xi Jinping that Blackwell chips are “10 years ahead of any other chip” and said he would not allow China access to them. However, this week Trump authorized the sale of NVIDIA’s H200 chips to China, one generation behind Blackwell, although Beijing is still considering whether to allow their acquisition. Of course, this measure could reduce demand for smuggled Blackwell chips in the Asian country.

The difficulties of enforcing restrictions. Most NVIDIA chips are manufactured in Taiwan and sold through a complex network of distributors around the world. Jacob Feldgoise, an analyst at Georgetown University’s Center for Security and Emerging Technology, told the outlet that “the burden of proof to enforce and prosecute chip smuggling cases is quite high. Clear and convincing evidence is needed.”

DeepSeek remains silent. The Chinese startup has not responded to the allegations. Previously, DeepSeek had trained its models with older NVIDIA chips: 10,000 A100 units stockpiled by its parent company, the hedge fund High-Flyer Capital Management, before US export restrictions took effect in 2022. The company’s research papers from last year indicated that it had also used Hopper chips, the generation immediately before Blackwell. DeepSeek faces several blows from Washington: in April, the House Select Committee on the Chinese Communist Party published a report calling the startup “a profound threat” to American national security, accusing it of illegally using export-controlled NVIDIA chips.

Regulatory crackdown. NVIDIA confirmed this week that it has developed software-based location verification technology that could indicate in which country its chips are operating, although it has not yet been launched.
This tool would use the computing capabilities of its GPUs to monitor the performance and location of the processors. The company has clarified that it is read-only software that does not allow NVIDIA to remotely control or disable the chips. “There is no off switch,” the company said.

Cover image | DeepSeek, Xataka with Mockuuups Studio and NVIDIA

In Xataka | If anyone thought that Europe had no role in the race for AI, Mistral has something to tell them

A Chinese startup claims to have created its own TPU to compete with NVIDIA. The only problem is that it is three years late

A Chinese startup called Zhonghao Xinying (known internationally as CL Tech) has come to the fore with a bold promise. The company claims to have developed an AI chip that not only circumvents Western intellectual property restrictions, but also outperforms NVIDIA’s A100. Which is very good, but also a little bad.

Chana arrives. The chip in question has been named “Chana”, and according to SCMP it is a GPTPU (General Purpose Tensor Processing Unit). Unlike NVIDIA’s GPUs, aimed at accelerating AI workloads among other tasks, this is an ASIC: an application-specific integrated circuit designed from the ground up for neural network workloads.

The promise. According to Zhonghao Xinying, Chana offers up to 1.5 times the performance of NVIDIA’s Ampere-based A100. Not only that: it achieves that performance with 30% lower power consumption. The startup highlights that the computational cost per unit would therefore be less than half that of the A100.

A little company history. Behind Zhonghao Xinying is Yanggong Yifan, an engineer trained at Stanford and the University of Michigan. He worked on several generations of Google TPUs and on Oracle chips, and in 2018 founded this startup in Hangzhou together with Hanxun Zheng, an engineer who spent several years at Samsung. They were joined by other engineers from Microsoft, Oracle, NVIDIA, Amazon and Facebook, according to Baidu. This is another of those cases of “boomerang talent”: Chinese engineers forged in the US who then return to China to create solutions for their own industry.

Solutions that do not depend on the West. Yanggong affirms that the chip features “fully self-controlled IP cores, a custom instruction set, and a fully in-house computing platform.
Our chips do not rely on foreign technology licenses, ensuring long-term security and sustainability from an architectural perspective.”

But. Striking as the achievement is, it needs perspective. The NVIDIA A100 is a 2020 AI GPU, and even with the improvements this Chinese startup promises, its performance is far from, for example, the Hopper-architecture H100 chips that appeared in 2022, not to mention the latest Blackwell Ultra chips, currently NVIDIA’s greatest exponent in AI silicon. There are also no details about who manufactures the chip; one candidate would be SMIC, which has 7nm technology.

They are very far away, and they have another problem. The technical achievement of these engineers is certainly notable, but everything indicates they are still far from what NVIDIA and its competitors, such as AMD or Google with its recent Ironwood TPU, are achieving. Another element works against them: Chinese manufacturers still lack direct access to the most advanced photolithography on the market, and although there is progress from Chinese equipment makers in that regard, competing is certainly complicated without access to the most advanced technologies.

Pressure. In 2024 the company posted revenues of 598 million yuan (73 million euros) with a net profit of 85.9 million yuan, but in the first half of this year revenues were only 102 million yuan, with losses of 144 million yuan. The firm has reached an agreement with its investors under which it must go public by the end of 2026 or be forced to buy back shares. The financial pressure is therefore notable, and the company must demonstrate in the coming months that its roadmap is truly competitive.

In Xataka | China was no longer supposed to be able to get its hands on NVIDIA’s most advanced chips. Until it found a shortcut in Indonesia

In a stock-market carom, Google has stood up to NVIDIA, leaving an unexpected winner in the frantic AI race: Larry Page

NVIDIA had it made as the best-positioned AI chip manufacturer. At least it did until Google started making chips. This new scenario has excited investors, who have rushed to buy Alphabet shares, pushing the price up as much as 6.3% from one day to the next and accumulating a gain of more than 75% since its August levels. The surge in the value of Google’s parent company has also coincided with a dip in Oracle’s valuation, which has caused chaos on the podium of the world’s largest fortunes, according to Forbes.

What AI gives, AI takes away. A few months ago Larry Ellison, founder of Oracle, rose to become the second-largest fortune in the world, overtaking Mark Zuckerberg. His fortune reached $291.6 billion thanks to the good growth prospects created by the construction of data centers for AI. In fact, the Oracle founder’s fortune grew so much that he came close enough to the untouchable Elon Musk to threaten his position on that list. Just as AI lifted Larry Ellison to second place, AI has now taken that place away and handed it to Larry Page, who reaches it with a fortune of $261.5 billion.

Google rises, Oracle falls. The Google stock rally contrasts with the downturn suffered by the main architect of the cloud infrastructure in which AI lives, which has shed up to 6.79% of its price in recent days. That decline has hit Ellison’s fortune, heavily tied to Oracle’s performance, which has fallen to $256.7 billion, displacing him to third position. The same stock market momentum has taken Google’s other founding partner, Sergey Brin, to fourth position, with a fortune of $242.4 billion, while Alphabet shares brought the company close to a market capitalization of almost $4 trillion.

Mark Zuckerberg and Jeff Bezos didn’t even see it coming.
The most pronounced falls in recent months have been those of Jeff Bezos and, above all, Mark Zuckerberg, who, accustomed to the Top 3 of the greatest fortunes, fall to fifth and sixth position in the Forbes ranking. The decline in Mark Zuckerberg’s fortune is especially striking, due to the poor performance of Meta shares in recent weeks. Interestingly, Meta shares have broken their downward trend following Google’s announcement that it is getting into the AI semiconductor business, and the rumors that Zuckerberg could swap NVIDIA processors for the Tensor Processing Units manufactured by Alphabet.

Larry Page and Sergey Brin: same company, different fortunes. Although Page and Brin co-founded Google and share control of the company through their shares, the two do not own exactly the same number of shares, and that detail makes a big difference in their wealth. According to Alphabet’s public filings with the US Securities and Exchange Commission (SEC), between them the two magnates concentrate 87.9% of Alphabet’s class B shares, which grant 10 votes per title. However, the figures show that Page has just over 389 million shares, while Brin holds some 362.7 million, which makes Page the main beneficiary of the rally in the shares of the company they founded.

Brin has been more generous with science. The key to this gap is that Sergey Brin has been much more active than Page in donating and selling part of his stake in Alphabet, and that has reduced his share package over time. Brin has donated large volumes of Alphabet and Tesla shares to research into treatments for Parkinson’s disease, bipolar disorder and autism, after a genetic mutation was discovered that made him prone to developing Parkinson’s.

In Xataka | Larry Page and Sergey Brin founded Google and became millionaires. Now they are dedicated to collecting gigantic airplanes

Image | Flickr (Fortune Global Forum, TED Conference)
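The share-count gap described in this piece translates directly into a wealth gap during a rally. A toy calculation using the article’s share counts; the share price used is an assumed, illustrative figure, not Alphabet’s actual quote:

```python
# Illustrative arithmetic for the Page/Brin gap. Share counts come from the
# article; the price per share is an assumption for illustration only.

page_shares = 389.0e6   # ~389 million shares (from the article)
brin_shares = 362.7e6   # ~362.7 million shares (from the article)
price = 300.0           # ASSUMED illustrative price per share, in dollars
daily_gain = 0.063      # the 6.3% single-day rise mentioned in the article

def one_day_gain(shares: float) -> float:
    # Dollar gain from a single 6.3% move on a position of `shares`.
    return shares * price * daily_gain

page_gain = one_day_gain(page_shares)
brin_gain = one_day_gain(brin_shares)

print(f"Page gains ~${page_gain / 1e9:.2f}B in one day")
print(f"Brin gains ~${brin_gain / 1e9:.2f}B in one day")
print(f"The gap between them widens by ~${(page_gain - brin_gain) / 1e9:.2f}B")
```

Under these assumptions a single 6.3% day adds roughly half a billion dollars to the distance between the two co-founders, which is why the same rally reorders them differently on the Forbes list.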

NVIDIA, Microsoft and Anthropic have signed a new multibillion-dollar agreement

Microsoft, NVIDIA and Anthropic have recently announced a series of strategic alliances that redraw the map of power in the generative AI race. Anthropic will deploy its Claude models on Azure, Microsoft’s cloud, while committing to purchase $30 billion in computing capacity and to contract additional capacity of up to one gigawatt. For their part, NVIDIA and Microsoft will invest up to $10 billion and $5 billion respectively in the startup.

The triangular pact, in figures. Anthropic will have access for the first time to Microsoft Foundry, where its most advanced models (Claude Sonnet 4.5, Claude Opus 4.1 and Claude Haiku 4.5) will be available to Azure enterprise customers. With this, Claude becomes the only advanced model present in all three of the world’s main cloud services. Additionally, Microsoft promises to maintain the integration of Claude in its Copilot family, including GitHub Copilot, Microsoft 365 Copilot and Copilot Studio. In parallel, NVIDIA and Anthropic establish their first collaboration of this caliber: they will work together on design and engineering to optimize the Claude models for future NVIDIA architectures, starting with the Grace Blackwell and Vera Rubin systems.

Microsoft looks for alternatives to OpenAI. This move comes just weeks after OpenAI completed its restructuring toward a for-profit model and renewed its agreement with Microsoft. Although Microsoft maintains a 27% stake in OpenAI valued at about $135 billion, the new terms of the deal have relaxed some key elements of its exclusivity: OpenAI can now collaborate with third parties and release open source models, while Microsoft no longer has the right to be its sole computing provider. According to The Verge, these changes in the relationship with OpenAI are precisely what allowed Microsoft to close this pact with Anthropic.
In fact, Microsoft had already been betting on Claude in some of its services: in Visual Studio Code, for example, it prioritizes Claude over GPT-5 in its model selector, and it recently added Claude Sonnet 4 and Claude Opus 4.1 to Microsoft 365 Copilot.

Circular financing: money that comes back. As is customary in these AI macro-deals, a clear circular financing dynamic emerges. Microsoft and NVIDIA pump capital into Anthropic, which in turn commits to spending tens of billions on infrastructure provided by those same companies. In essence, some of the money invested returns as revenue from cloud computing services and specialized hardware. It is not a new phenomenon: Anthropic already has similar agreements with Amazon, which has invested $8 billion and remains its main infrastructure provider, and with Google, which in recent weeks announced a pact to provide the startup with up to one million TPUs. These types of cross-investments have become the norm in the generative AI ecosystem, creating almost symbiotic relationships between companies to meet their computing and infrastructure needs.

One gigawatt. Building a data center with that capacity could cost around $50 billion, according to industry estimates, with some $35 billion dedicated exclusively to AI chips. Although the figure pales next to OpenAI’s Stargate project, which aspires to $500 billion in investment, Anthropic’s approach seems more pragmatic and execution-focused. The company led by Dario Amodei has gained ground in the enterprise market with less media noise but solid results: its annualized revenue rate now reaches $7 billion, although, like the rest of the AI startups, it continues to spend much more than it earns.

Diversification. What is really relevant about this agreement is that it confirms a trend: large technology companies are no longer betting everything on a single card in AI.
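The circular-financing flow described in this piece can be made concrete with a toy calculation using the deal’s headline figures. Treating Anthropic’s full $30 billion commitment as Azure revenue is a simplifying assumption; in practice it would be recognized over years:

```python
# Toy illustration of the "circular financing" dynamic, in billions of dollars.
# Figures come from the article; the flow assumptions are ours, for illustration.

nvidia_investment = 10.0            # NVIDIA's investment in Anthropic
microsoft_investment = 5.0          # Microsoft's investment in Anthropic
anthropic_azure_commitment = 30.0   # Anthropic's committed Azure compute spend

capital_in = nvidia_investment + microsoft_investment

# Part of the invested capital comes back to the investors as revenue:
# here, Anthropic's compute commitment flows back to Microsoft's cloud.
revenue_back_to_microsoft = anthropic_azure_commitment

print(f"Capital into Anthropic: ${capital_in:.0f}B")
print(f"Committed spend flowing back to Microsoft: ${revenue_back_to_microsoft:.0f}B")
print(f"Microsoft's gross inflow exceeds its stake by "
      f"${revenue_back_to_microsoft - microsoft_investment:.0f}B")
```

The point of the sketch is the asymmetry: the $15 billion invested is dwarfed by the committed spend flowing back, which is why these deals read as revenue deals as much as investments.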
Microsoft, which has invested billions in OpenAI since 2019 and made it the flagship of its AI strategy, is now expanding its portfolio with Anthropic. For its part, Anthropic demonstrates its ability to maintain multiple alliances without compromising its independence. It is the sensible option, and the one that minimizes risk.

Cover image | Microsoft

In Xataka | Tim Cook's end at Apple is approaching

SoftBank abandons NVIDIA at its peak. What comes next is the biggest bet in its history

SoftBank has sold its 32.1 million NVIDIA shares for $5.83 billion, completely liquidating its position in the chipmaker, according to CNBC. It has also divested part of its stake in T-Mobile for another $9.17 billion.

Why it matters. The sale signals a radical strategy: SoftBank is abandoning the physical infrastructure (chips) to bet directly on the application layer (AI models). This is not necessarily a lack of trust in NVIDIA (although it is not a great sign) but an extreme concentration of capital in OpenAI, where it has committed up to $40 billion and leads the $500 billion Stargate data-center project.

The facts. SoftBank announced profits of $16.3 billion in its fiscal second quarter, driven primarily by its investments in OpenAI through the Vision Fund. The fund earned $19 billion in the July–September period, offsetting losses in other positions such as another AI giant: Alibaba.

Between the lines. This is not the first time SoftBank has sold NVIDIA. It already did so in January 2019, liquidating a $4 billion position acquired in 2017. That move, made when NVIDIA shares had fallen more than 50%, drew heavy criticism for its timing. Now it repeats the move, but in a radically different context: NVIDIA is at all-time highs and dominates the AI chip market. The difference is that in 2019 SoftBank sold out of a need for liquidity after the WeWork fiasco. This time it sells out of strategy: it needs a lot of cash to finance its bet on OpenAI and cannot do so without liquidating winning positions. In any case, the reading is clear: when it comes to AI, SoftBank believes more in the profitability of the models than in that of the infrastructure.

The money trail. SoftBank has already invested $9.7 billion in OpenAI through Vision Fund 2 since September 2024. The company will lead the Stargate project with OpenAI, contributing $19 billion of the initial $100 billion (OpenAI will put in another $19 billion).
Each firm will control 40% of the project. To contextualize the magnitude: SoftBank's total commitment to OpenAI ($40 billion) is equivalent to almost seven times the value of the NVIDIA shares it just sold.

The contrast. The really surprising thing is not that someone is selling NVIDIA at its highs, but that that someone is precisely SoftBank. Masayoshi Son has built his reputation as one of the most aggressive investors in tech, known for holding positions even when the market turns against him and for doubling down on bets in times of uncertainty. This sale of NVIDIA, the most coveted asset of the moment in technology, would have made more sense coming from conservative funds or traditional institutional investors looking to lock in profits. But SoftBank is not that type of investor. That it is precisely the Vision Fund that abandons the star AI stock says more about the magnitude of its commitment to OpenAI than about its view of NVIDIA.

Yes, but. SoftBank remains indirectly linked to NVIDIA. The Stargate project will rely heavily on NVIDIA chips for its data centers, and the company also maintains its majority stake in ARM, whose architecture competes with NVIDIA's in certain segments. In addition, Son's record on big bets is mixed: the Vision Fund lost $27.4 billion in 2022 due to failed investments like WeWork and FTX ($100 million invested). OpenAI could be its great redemption. Or its biggest mistake.

At stake. SoftBank's bet represents a clear hypothesis about where value is captured in AI: not in making the chips that train the models, but in owning the models and the infrastructure that runs them. It is choosing to be OpenAI rather than being OpenAI's provider. Time will tell whether it was right to trade picks and shovels for the mine itself.

In Xataka | AI is a bonfire of money, and Big Tech has just decided to add even more fuel to it

Featured image | Wikimedia Commons

NVIDIA and OpenAI know that the AI bubble could burst in their faces. Their solution: let the state foot the bill

Too big to fail. It is a theory in economics and finance which argues that certain corporations, especially banks, are so large and so interconnected that their failure would have catastrophic consequences for the global economy, and that they must therefore be rescued by governments. The argument gained traction in the 2008 financial crisis and is beginning to be heard again, no less than from the mouths of NVIDIA and OpenAI.

Government support. At a WSJ event, Sarah Friar, CFO of OpenAI, stated that the company will not go public in the short term (not until at least 2027, she says) and that its priority is growth and investment in R&D, above profitability. The most striking part of her remarks came when she said that they hope the government will support the financing of future data-center agreements. That OpenAI is burning astronomical amounts of money to lead the AI race is something we have been discussing for a long time, but it is the first time the company has directly appealed to the state to guarantee it. Shortly after, Friar walked it back in a LinkedIn post: "OpenAI does not seek government support for our infrastructure commitments. I used the word 'support' and that confused the message." But the seed was already planted.

Depreciation. OpenAI is closing deals to secure computing capacity. We have seen it with its alliances with NVIDIA, AMD, Broadcom and, more recently, Amazon. The complexity of the situation is that the depreciation rates of AI chips remain uncertain. As the Washington Post's Gerrit de Vynck notes on X, OpenAI is going to need the best chips to stay at the forefront of the AI market, but financing that demand is not the same if the life cycle of the chips is seven years as it is if they last only two. The money is flowing; the question is for how long.
In this uncertain scenario, government support would act as a safety net so that banks and private equity firms feel more comfortable and keep releasing billions for OpenAI.

China will win. NVIDIA is also appealing for government involvement, in subtler ways. At a Financial Times event in London, its CEO Jensen Huang warned that "China is going to win the AI race." His argument is that China has more flexible regulation and government subsidies for the energy its data centers need, which is no small thing. This energy advantage allows China to compete even though it cannot buy NVIDIA's most powerful chips. Huang doesn't say it directly, but it is a clear wake-up call: either you subsidize the energy our data centers need, or China will win.

The fear. The question has been hanging in the air for a long time: are we witnessing a new bubble? The investor Michael Burry thinks so, and he is not just any investor: he is the one who struck gold when the real estate bubble burst in 2008 (the movie 'The Big Short' is based on his story). The thing is, Burry has just taken a short position against NVIDIA, which was recently valued at $5 trillion. Fear of the bubble continues to grow: according to a Coatue report, the share of fund managers who believe we are in a bubble rose to 54% in October, up from 37% in July of this year.

Numbers. The fear is not at all unfounded, and all you have to do is take a look at the numbers. Tomás Pueyo writes in Uncharted Territories that the economy should be in recession, but the numbers show the opposite, and AI is behind this growth. The S&P 500 is through the roof, and 48% of the index corresponds to AI-related stocks (source: Bianco Research). Share prices are far above dot-com bubble levels, all on meager profits. And that's not all: the economic growth of the United States in 2025 is due almost entirely to the construction of data centers for AI.
According to the economist Jason Furman, without taking data centers into account, US GDP would have grown only 0.1% in 2025. The creator of the newsletter Today in Tabs gave a very graphic example: "Our economy could be reduced to three AI data centers in trench coats."

Tightrope. Returning to OpenAI, its CFO assured the Financial Times that the company could be profitable simply by investing less aggressively, since it has a "very healthy" margin structure. The thing is, they can't do it. OpenAI needs to achieve AGI, its great promise and the only thing that could justify this insane level of investment. If it fails, it will cause a shock wave that could hit NVIDIA, AMD, Oracle… and end up dragging down the global economy. Meanwhile, the competition is tightening: Anthropic is eating its lunch in the enterprise market, and Google is not only winning ever more users with Gemini but also reached record revenue in the last quarter, while OpenAI lost $11.5 billion in the same period. It doesn't look good.

Images | Wikipedia

In Xataka | NVIDIA will invest $100 billion in OpenAI so that OpenAI buys chips from NVIDIA. And it's a disturbing sign
