Michael Burry just shorted NVIDIA. It matters because he is the man who predicted the 2008 real-estate bubble

Michael Burry, the well-known investor and fund manager who predicted the 2008 financial crisis, has revealed bearish positions against NVIDIA and Palantir, shortly after posting a warning on social media about excess optimism in the market, a warning Bloomberg described as "cryptic", for several reasons. The moves, disclosed in regulatory documents filed on Monday, have reopened the debate over whether artificial intelligence is inflating a speculative bubble. What exactly has Burry done? His investment fund, Scion Asset Management, has bought put options worth $186.5 million against NVIDIA and $912.1 million against Palantir, according to mandatory filings with the SEC. These options pay off if the stock price falls. Burry also took bullish positions (calls) in Pfizer and Halliburton, two stocks that have underperformed the market this year. Why does it matter? Burry is not just any investor. His track record is marked by his short bet against the US real-estate market two years before the 2008 crash, enduring criticism from his own investors until Lehman Brothers went bankrupt and his fund multiplied its profits. His story inspired the film "The Big Short". Having earned that fame, when Burry bets against something, the markets pay attention, although his record is not infallible: he has been wrong before with other bubble predictions. The context of his moves. Days before these positions became known, Burry broke two years of silence on social media with an unsettling message: "Sometimes we see bubbles. Sometimes you can do something about it. Sometimes the only winning move is not to play," accompanied by an image of his character from the film. On Monday night he posted again, this time sharing a Bloomberg chart about concerns over circular financing between OpenAI, NVIDIA and other AI companies. Market reactions.
Palantir shares fell more than 10% on the news, even though the company had just raised its annual revenue guidance. NVIDIA also fell by up to 2.9%. Palantir CEO Alex Karp responded in a CNBC interview, calling the idea of shorting companies like Palantir and NVIDIA, which he says are doing "noble tasks", "crazy". The bubble debate. For months, many investors have voiced concern over whether the AI boom is being artificially sustained. Ray Dalio, founder of Bridgewater Associates, recently told CNBC that "there are many things that look like bubbles," although he clarified that bubbles do not usually burst until the Federal Reserve tightens its monetary policy. According to his "bubble indicator", roughly 80% of market gains are concentrated in large AI-related technology companies. An important nuance. It is not entirely clear whether Burry is betting directly on a fall or whether these options are part of a more complex strategy to protect other investments. As Bloomberg points out, regulatory filings only reflect long positions, so if he were using these puts as a hedge for other holdings, we would not know. The curious thing is that his first-quarter filing did include a note explaining that puts "could be used to cover long positions", but the third-quarter filing says nothing about it. Scion's recent history. This is not the first time Burry has bet against NVIDIA. In the first quarter he had already liquidated almost his entire portfolio of listed shares and bought put options against the chipmaker. He has also had successes: in the third quarter he closed positions in Alibaba (with a 36.5% profit), Estée Lauder (27%), ASML Holding (45.7%) and Regeneron Pharmaceuticals (10.8%). Canary in the coal mine or false alarm? The question on Wall Street is whether Burry is once again detecting a bubble before everyone else, or whether he is wrong this time.
NVIDIA is up 54% this year, reaching a market capitalization of $5 trillion, while Palantir has soared 173% thanks to its expansion into AI-related businesses. Valuations are high, but both companies continue to grow and expand their business. Be that as it may, if there is a bubble, we will find out in the worst possible way: when it bursts. Cover image | Solen Feyissa and 'The Big Short' In Xataka | The geopolitical irony of the chip war has an unexpected beneficiary: Russia

The secret of Chinese AI companies to compete without Nvidia chips: electricity subsidized by Beijing

Everywhere we look there is artificial intelligence. Everyone talks about it, but what is its fuel? Not the data, not the chips: it is electricity. While Western technology companies search for ways to power their increasingly energy-hungry data centers, China has taken a different path. Beijing has designed an energy subsidy for its technology sector with a clear objective: to make the electricity that powers the digital brains of its next generation of chips cheaper. An energy subsidy. In September, the Chinese government banned its large national technology companies, including Alibaba, ByteDance and Tencent, from acquiring artificial intelligence chips from the American firm Nvidia, in an attempt to strengthen local production. The consequence, however, was immediate: national processors consume more electricity. According to The Chosun Daily, generating the same number of tokens with Chinese chips requires 30% to 50% more energy than with Nvidia's H20, which sent electricity bills skyrocketing and led companies to complain to regulators. To make up for that gap, local governments introduced subsidies that cover up to a full year of operating costs, according to the Hong Kong outlet on.cc. In those provinces, industrial electricity was already 30% cheaper than in the developed coastal areas of the east, but with the new incentives the price could fall to 0.4 yuan per kilowatt-hour, a record low for the Chinese technology industry. How does the energy plan work? The scheme is relatively simple but strategic. Local governments offer electricity discounts of up to half to data centers that use domestically produced chips. Operators that use foreign processors, such as those from Nvidia or AMD, are excluded from the program. In addition, the energy-producing provinces receive direct support from the state to finance the discounts, with the aim of reducing dependence on technology imports and offsetting the higher consumption of local chips.
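To make the subsidy arithmetic concrete, here is a minimal sketch of the per-token cost comparison the figures above imply. The 0.8 yuan/kWh pre-subsidy baseline is an assumption of ours (the article only gives the subsidized 0.4 yuan/kWh and a discount "of up to half"); the 50% energy penalty is the upper bound cited from The Chosun Daily.

```python
# Illustrative cost-per-token comparison under the figures above.
# Assumption flagged: a 0.8 yuan/kWh pre-subsidy rate (inferred from a
# discount "of up to half" landing at 0.4 yuan/kWh).
PRE_SUBSIDY_PRICE = 0.8    # yuan per kWh (assumed baseline)
SUBSIDIZED_PRICE = 0.4     # yuan per kWh (figure cited in the article)
ENERGY_PENALTY = 1.5       # domestic chip uses up to 50% more energy

energy_h20 = 1.0                              # arbitrary kWh per token batch
energy_domestic = energy_h20 * ENERGY_PENALTY

cost_h20 = energy_h20 * PRE_SUBSIDY_PRICE           # 0.8 yuan
cost_domestic = energy_domestic * SUBSIDIZED_PRICE  # 0.6 yuan
# With the discount, the less efficient domestic chip ends up cheaper
# to run than an H20 billed at unsubsidized rates.
```

Under these assumptions the subsidy more than absorbs the efficiency gap, which is exactly the bridge the policy is designed to build.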
According to the Financial Times, Chinese data centers that rely on domestic semiconductors are, for now, less energy efficient, but the subsidy seeks to bring their costs in line with those of more advanced foreign chips. These regions (Guizhou, Gansu and Inner Mongolia) have become hotbeds for data center clusters thanks to their abundance of hydropower and coal. There, companies like Alibaba and Tencent are building new facilities to house their generative AI models, taking advantage of lower energy costs and tax incentives. The policy combines three strategic priorities: making energy cheaper, promoting domestic chips and reinforcing technological sovereignty. In a context of United States restrictions, every subsidized kilowatt is also a political statement. An industrial policy with a geopolitical charge. Behind the energy plan lies a long-range political commitment. The Chinese government intends for its technology companies to progressively replace imported chips with domestic processors, even if that implies higher costs in the short term. The electricity subsidy acts as a temporary bridge so that the national giants can adopt local chips without losing competitiveness. The measure is part of a broader national strategy of technological self-sufficiency. As the Financial Times explains in its series The State of AI, China is using its "society-wide mobilization capacity" to accelerate the development of artificial intelligence. The country already leads in the number of AI patents and scientific publications, and although the United States maintains an advantage in chips and talent, the gap narrows every year. Analyst Dan Wang, quoted by the same outlet, points out: "China has achieved a unique balance between engineering capacity, state control and massive industrial deployment, allowing it to advance faster than other countries in the practical application of AI." Meanwhile, in the West... China's decision contrasts with the energy challenges of the United States.
Microsoft CEO Satya Nadella warned that the real bottleneck of AI is no longer the chips but the energy. In fact, he explained that many companies are accumulating chips they cannot connect for lack of power supply. Both Microsoft and Google are already studying building modular nuclear reactors to power their future data centers, a sign of the enormous energy consumption that artificial intelligence requires. While Silicon Valley seeks electricity, China subsidizes it. This asymmetry reflects two different models, one guided by state intervention and the other by market competition. Both pursue the same goal, sustaining the artificial intelligence revolution, but with opposite philosophies. A future plugged into the state. The Chinese subsidy does not just alleviate costs: it redefines the relationship between the state and the private sector in the age of AI. As analyst Arnaud Bertrand observed, US restrictions pushed China towards a different model: more efficient, more open and more collective. "By operating under hardware limitations, Chinese companies have learned to optimize resources and share open models like Qwen or DeepSeek," Bertrand wrote on social media. That strategy, based on efficiency and diffusion, could give China a long-term advantage in global adoption, since any company in the world can download and adapt its models. The country that controls the plug. China is not just making the chips that power its artificial intelligence; it is also building the electrical grid that makes them possible. In a world where data is the new oil, Beijing has decided to subsidize the fuel of the digital brain. While the West debates how to connect its supercomputers, China plugs them in at a reduced price. And in this race, whoever controls the plug could end up controlling the future. Image | FreePik and FreePik In Xataka | The world of AI has a problem: there is no energy for so many chips

NVIDIA is the most powerful company on the planet because it made a bet and it is winning: Crossover 1×28

At NVIDIA they can't stop rubbing their hands. They sell everything they make, and they keep signing circular financing agreements that only strengthen their current position. The company has struck gold with the rise of artificial intelligence, and to talk about it we have dedicated this new Crossover 1×28 to recounting the history and evolution of a company in a state of grace. We start by talking about how NVIDIA gained a privileged position in the world of gaming, and how in the 2010s it (briefly) took advantage of the rise of cryptocurrency mining. All of this has let NVIDIA enjoy the leading role in the duopoly that exists in the gaming graphics card market: only AMD comes close, although Intel has recently tried to carve out some space for itself. What catapulted the company, however, was a singular bet: making sure its GPUs could be used for artificial intelligence. That market was still in its infancy when CUDA emerged, but little by little researchers in the field found that the platform was a great ally for their advances. And then, of course, ChatGPT arrived, and with it the AI gold rush. NVIDIA has become more essential than ever, and everyone, large and small, wants its AI accelerators for new data centers. It is relentless, amazing and somewhat disturbing, because NVIDIA's outsized growth only reinforces the hypothesis that we are facing a gigantic AI bubble. On YouTube | Crossover

NVIDIA has risen to the top with its AI data centers. Its next big leap: cars

NVIDIA has unveiled Drive AGX Hyperion 10, a computing and sensor platform designed so that any manufacturer can produce Level 4 autonomous vehicles. Uber has already signed an agreement to deploy 100,000 units across its global network starting in 2027, and Stellantis, Lucid and Mercedes-Benz have also joined the project. Why it matters. For years, autonomous driving has been a persistent promise often wrapped in marketing. NVIDIA has turned that promise into an industrial offering with standardized architecture, certified chips and out-of-the-box simulation. It does not sell autonomous cars, but it does sell the operating system that will make them possible. The contrast. Tesla has been selling autonomy as a leap of faith for a decade, with constant updates, its own fleet and promises of "millions of autonomous Teslas" every year. NVIDIA, on the other hand, offers an open platform where any manufacturer can plug in its hardware. Tesla wants to be the Apple of cars; NVIDIA prefers to be something closer to Windows. Between the lines. Automotive accounts for only 1.3% of NVIDIA's revenue, but that segment is growing faster than the rest. In any case, Uber's announcement has no real timetable for those 100,000 units beyond what has been made public. Waymo, which has been developing its robotaxis for years, is already on its sixth generation and has Alphabet's financial muscle behind it, yet it barely operates 2,000 of them. There is a considerable gap between ambition and reality. The backdrop. Drive Hyperion 10 is built on two Thor chips (2,000 teraflops each), fourteen cameras, nine radars, one LiDAR and twelve ultrasonic sensors. NVIDIA has designed it with full redundancy: if a component fails, the vehicle stops safely, avoiding chain errors that multiply the potential damage. Lucid will be one of the first to offer Level 4 autonomous driving to individual customers and not just fleets.
Its interim CEO has admitted that so far they have disappointed on driving assistance. Their bet on NVIDIA is the classic implicit admission: better to buy the brain than to build it. The money trail. NVIDIA will not build robotaxis for now; it sells infrastructure: chips, simulation software, synthetic data... And it charges for each vehicle that uses its platform. That is a more predictable revenue model than depending on full autonomy arriving some day. Huang, in any case, has said that moment is near. The interesting thing is not whether he is right, but that his definition no longer depends on blind faith. It depends on regulators, certifications and industrial testing. Autonomy has stopped being science fiction and has become an engineering problem. And those problems are solved with processes, not promises. In Xataka | China has turned the electric car market into a crazy race. And Porsche is paying for it with billion-dollar losses Featured image | Xataka

NVIDIA will invest $1 billion to continue advancing AI. The surprising thing is that it will do so in Nokia

Nokia dropped out of the general public's conversation years ago. For many people, Nokia is a memory of those rugged phones from decades past. That is why it has attracted so much attention that NVIDIA, the most powerful company in the world of artificial intelligence right now, has announced that it will invest $1 billion in Nokia, and that the two companies are preparing a strategic alliance around mobile networks and artificial intelligence. The immediate question is obvious: what has NVIDIA seen in Nokia to put that money there? The company NVIDIA has invested in is the usual Nokia, the Finnish telecommunications parent company that survived the mobile era. Its headquarters are in Espoo, very close to Helsinki, and today its business focuses on the development of network infrastructure, software and advanced connectivity solutions. It is the company that provides operators around the world with the technology behind mobile networks and the expansion of 5G. From the 3210 to 5G towers. There was a time when Nokia dominated the mobile market with handsets that marked an era. The 3210, recently re-released as a basic phone, and the first camera phones are part of our collective memory. However, the emergence of smartphones completely changed the landscape. In 2014, Nokia said goodbye to that stage by selling its device business to Microsoft. Since then, the mobile phones bearing its name have belonged to HMD Global, while Nokia Corporation, as we say, concentrates on network technology. The move no one expected. NVIDIA and Nokia have announced a strategic alliance that combines money and innovation. The American company will invest $1 billion in Nokia, an operation carried out by subscribing new shares at a price of $6.01 per share. This is not a purchase, but a capital increase.
In exchange, the two companies will work together to develop mobile networks based on artificial intelligence, a step that prepares them for the jump to 6G. NVIDIA's investment does not consist of buying shares on the market, but of subscribing new shares issued directly by Nokia. In total, more than 160 million new shares will be created, in an operation that will expand the company's capital. There is no change of control, and the planned stake is 2.9%. The deal is subject to customary approvals before closing, but it projects an interesting long-term alliance between the two companies. A bet with a 6G destiny. The agreement is not limited to money. With this investment, NVIDIA and Nokia are teaming up to develop a new generation of mobile networks based on artificial intelligence. The objective is for operators to be able to offer faster, more efficient services adapted to the growth in data traffic generated by AI. Dell Technologies, which provides servers, and T-Mobile US, which will test the first AI-RAN networks with a view to the jump to 6G, also take part in this roadmap. Behind the acronym AI-RAN lies the great bet of this alliance: applying artificial intelligence to the network that links our phones to the antennas. These networks learn from traffic, adjust themselves, and make better use of the available energy and spectrum. Omdia estimates that this segment will move more than $200 billion between now and 2030. It is a technical leap, but above all a way of preparing the ground for 6G. Why Nokia is back on the scene. For Nokia, the agreement represents a capital injection and strategic validation. The company reinforces its roadmap towards next-generation networks and consolidates its position in a market where it competes with giants such as Ericsson and Huawei.
Beyond the financing, it gains visibility: NVIDIA's backing boosts its image as a leading technology partner in the era of artificial intelligence. On the stock market, the announcement has already triggered a strong rise in its shares. What NVIDIA gains (and it is not little). For NVIDIA, the alliance expands its reach beyond data centers. Getting into network infrastructure means bringing artificial intelligence to the edge, where the data is generated. With Nokia's technology, it can integrate its platform into antennas, base stations and optical systems, delivering AI capabilities directly from the network. It is a way of extending its dominance in accelerated computing into new territory: telecommunications. The first to try it will be far from Europe. None of this will be noticeable immediately, but it lays the foundations for the connectivity of the future. AI-RAN networks promise faster, more stable and more efficient connections, essential for new services that depend on artificial intelligence. From augmented reality glasses to drones and connected cars, everything points to operating with lower latency and greater reliability. The first tests, driven by T-Mobile US, will be carried out in the United States. Images | NVIDIA | Bolivia Inteligente In Xataka | Elon Musk already bought Twitter to control the narrative. His Grokipedia is another symptom of that obsession

OpenAI teamed up with NVIDIA and made circular financing fashionable. Anthropic has returned the ball with a surprise partner: Google

As if OpenAI were going to be the only one looking for powerful allies. Far from it: Anthropic has just done the same, announcing an eye-catching agreement with Google. The AI startup will have access to up to one million Google TPUs in a pact worth "tens of billions of dollars". Less noise, more substance. The figures are modest compared with those OpenAI has managed in its circular financing agreements with NVIDIA, AMD or Broadcom, but Anthropic seems to be taking a very different position here. Compared with colossal projects like Stargate, Anthropic's plan is focused on execution. Without making much noise, the company led by Dario Amodei has been gradually conquering the business sector. More than 1 GW of computing capacity. CNBC indicates that this investment will allow the creation of a data center with a computing capacity of more than 1 GW, ready in 2026. A center of those characteristics is estimated to cost about $50 billion, of which about $35 billion would go to AI chips. It may not be comparable to Stargate and its plan to invest $500 billion in data centers, but the alliance between Anthropic and Google is significant. More than circular financing. The partnership certainly features elements of circular financing, but it is more of a symbiotic relationship with a cross-investment component. The dynamic is simple, and it is now completed with a commercial return: the agreement requires Anthropic to buy or rent infrastructure services from Google Cloud. A virtuous circle. With its original investment in Anthropic, Google helped that company grow, which in turn gives Anthropic not only the ability to grow but the need for enormous computing power... provided by Google. In essence, some of the money Google invests in Anthropic returns to Google Cloud as revenue.
The vicious (or virtuous, as they say in the US) circle is complete. Anthropic diversifies. Anthropic's AI models are trained and served on infrastructure from several manufacturers. They use Google TPUs, Amazon Trainium processors and NVIDIA GPUs alike, with each platform assigned to a specialized workload. In the case of Google's TPUs, Anthropic cites "their strong price/performance ratio and their efficiency". Promising successes, but... Anthropic's growth is evident, and its annualized revenue run rate (ARR) is now estimated to reach $7 billion. Claude Code, its developer assistant, managed to generate $500 million after just two months on the market. But as always, that revenue cannot hide the fact that Anthropic, like other AI startups, continues to spend much more money than it earns. Amazon is its other great ally. In fact, the company led by Andy Jassy has invested around $8 billion, while official data indicates that Google has invested $3 billion. AWS is still considered Anthropic's largest infrastructure provider, and its Project Rainier supercomputer, based on Trainium 2 chips, gives it a large computing capacity for every dollar invested, Amazon points out. The company's influence is not only financial: it is structural. Image | Wikimedia | Fortune Brainstorm Tech In Xataka | You thought you had an amazing connection on Tinder, but you were actually chatting with ChatGPT

AI is running out of power on Earth. So Nvidia has opted for servers in space

The energy appetite of data centers is nothing new. Elon Musk predicts a shortage of transformers within two years. Sam Altman believes we will need an energy revolution, such as nuclear fusion, to keep pace. The planet was not prepared for so much energy demand. And that is why Nvidia is funding a possible solution: deploying servers off Earth. It is not science fiction. It is the business model of several startups that propose building the next hyperscale data centers in Earth orbit, and even on the Moon. The idea, which until recently sounded far-fetched, is gaining traction driven mainly by two factors: the insatiable demand of AI and the low-cost launches that Starship promises. One of the companies leading this idea is Starcloud, backed by the NVIDIA Inception program. And it is so serious that it plans to launch its first satellite, Starcloud-1, in November. On board it will carry the first data center GPU launched into space: an NVIDIA H100. The difficult part will come later. Starcloud-1 is a test unit the size of a small refrigerator, but the company's goal is to build a monster: a five-gigawatt orbital data center. Including the solar panels and the enormous radiator, it would measure four kilometers across. Its purpose is the training of large AI models in orbit. Why in space? As detailed in an extensive white paper, future models like GPT-6 or Llama 5 could require multi-gigawatt clusters, something "simply impossible with the current energy infrastructure" on Earth. In space there is no such limitation. What's more: according to Starcloud's calculations, server energy costs are 10 times lower in space than on Earth. The value proposition of space data centers rests precisely on two pillars that are a problem on Earth: energy and cooling. Solar energy 24/7. On Earth, solar power is intermittent. Panels depend on the day/night cycle, the weather and the atmosphere, which attenuates the radiation. In space, things change.
By placing its data centers in a sun-synchronous "dawn-dusk" orbit, the satellites follow the line that divides day and night on Earth. With the panels illuminated by the sun almost continuously, the system raises its capacity factor to more than 95%. "Almost unlimited, low-cost renewable energy," in Starcloud's words. And the cooling? How would they dissipate all that heat? Land-based data centers consume millions of liters of fresh water to cool themselves. There is no water in space, but there is something much better: an infinite heat sink at -270°C. The plan is not to ventilate the servers. The heat generated by the GPUs (such as the H100) will be managed inside sealed modules using liquid cooling (direct-to-chip or immersion), like high-performance systems on Earth. The difference is that the hot liquid does not go to an evaporation tower, but is pumped to gigantic radiator panels. These panels simply radiate the waste heat into the vacuum of space as infrared radiation. The Starcloud white paper details the calculations using the Stefan-Boltzmann law, estimating that a radiator at 20°C can cleanly dissipate more than 630 watts per square meter, without using a single drop of water. Not everything that glitters in space is gold. The pillar supporting this entire concept is the launch of high-capacity reusable rockets, such as SpaceX's Starship. Starcloud's calculations are based on a long-term cost of $30 per kilo put into orbit. But Starship is not ready, and it is certainly far from achieving full and rapid reusability. If that cost does not materialize, the economic viability of the system collapses. The other big problem is radiation. Commercial GPUs are not designed for space. Cosmic radiation and solar flares can fry electronics. The solution is shielding, which adds mass, and therefore launch cost. Not to mention that maintenance is not possible with current technology.
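For reference, the Stefan-Boltzmann estimate can be sketched in a few lines. This is an idealized calculation: the white paper's figure of more than 630 W/m² depends on assumptions (emissivity, one or two radiating faces, background sink temperature) that this sketch does not reproduce.

```python
# Sketch of the Stefan-Boltzmann estimate described above. Radiant
# exitance of a surface: j = emissivity * sigma * T^4.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiated_flux(temp_celsius: float, emissivity: float = 1.0) -> float:
    """Power radiated per square meter of surface, in W/m^2."""
    t_kelvin = temp_celsius + 273.15
    return emissivity * SIGMA * t_kelvin ** 4

# A single ideal blackbody face at 20 C radiates roughly 419 W/m^2; a
# panel radiating from both faces roughly doubles that per unit of panel
# area, and hotter coolant raises the figure steeply (T^4 scaling).
flux_20c = radiated_flux(20.0)
```

The fourth-power scaling is why radiator size, and therefore launch mass, is so sensitive to the coolant temperature the GPUs can tolerate.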

This is OpenAI's trump card for depending less on Nvidia

OpenAI has announced a multi-year agreement with AMD under which the chipmaker will supply artificial intelligence processors to feed part of its AI infrastructure. The pact includes hardware deployments equivalent to 6 gigawatts of power and gives the ChatGPT creators the option to acquire up to a 10% stake in AMD. An agreement of colossal dimensions. The plan contemplates OpenAI beginning to use AMD Instinct MI450 chips in the second half of 2026, with a first one-gigawatt installation. The New York Times compares the magnitude of the total deployment (6 gigawatts) with the electricity consumption of all Massachusetts households. AMD says the agreement could generate tens of billions of dollars in annual revenue, and more than $100 billion over four years counting the knock-on effect on other customers. Beyond the economics. As part of the pact, AMD has issued a warrant that allows OpenAI to acquire up to 160 million shares at one cent each. The option vests in tranches as specific milestones are met, including the first shipment of MI450 chips and AMD share-price targets that scale up to $600 per share. AMD shares jumped more than 20% in pre-market trading after the news broke, adding $80 billion to its capitalization. OpenAI diversifies suppliers. The move arrives just weeks after OpenAI closed a $100 billion agreement with Nvidia, the giant that dominates the AI chip market. With Nvidia, OpenAI promised to deploy hardware equivalent to 10 gigawatts. According to Reuters, Sam Altman has set expectations of reaching 250 gigawatts of total computing capacity by 2033, which explains this multi-supplier strategy. The company is also working with Broadcom on the development of its own processors (XPUs). AMD looks for its niche against Nvidia.
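As a back-of-the-envelope illustration (not an official term of the deal, whose vesting is tied to milestones), the warrant's maximum intrinsic value under the article's figures works out as follows:

```python
# Upper-bound value of the warrant described above, using only the
# figures in the article: 160 million shares exercisable at one cent,
# with milestone targets scaling up to $600 per share.
SHARES = 160_000_000
STRIKE = 0.01           # dollars per share (one cent)
FINAL_TARGET = 600.0    # dollars per share, final milestone cited

max_intrinsic_value = SHARES * (FINAL_TARGET - STRIKE)
# Roughly $96 billion if every tranche vested and AMD traded at $600.
```

That ceiling is why the warrant, not the chip orders alone, is what aligns the two companies' interests so tightly.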
For AMD this is great news, since the agreement represents a validation of its chips and software in a market where Nvidia prevails almost completely, controlling roughly 90% of the AI processor market. "We consider this agreement certainly transformative, not only for AMD but for the dynamics of the industry," declared Forrest Norrod, executive vice president of AMD. The company has been collaborating with OpenAI for years, contributing ideas to the design of previous generations such as the MI300X chips. Hunger for AI infrastructure. OpenAI and other big tech companies plan to spend more than $325 billion on data centers this year alone. Unlike giants such as Amazon, Microsoft or Google, which finance these projects with their available operating cash, OpenAI, which according to the latest reports has generated about $4.3 billion in revenue in the first half of 2025 while burning $2.5 billion in cash, needs to look for creative financing formulas. The agreement with AMD, like the one with Nvidia, allows OpenAI to secure more supply while aligning its strategic interests with those of its suppliers. In Xataka | Everything you ask Meta AI on WhatsApp or Instagram will be used to sell you things: this is the new mandatory clause

CUDA is the standard that has the world in its grip, and Nvidia is the only company with chips capable of running it. Until now

Meta will acquire Rivos, a Californian startup specialized in designing RISC-V-based chips, according to Bloomberg sources. Beyond the capabilities of its chips, the operation is part of a broader strategy: freeing itself from dependence on NVIDIA and thus taking control of its artificial intelligence infrastructure. What is at stake. Over recent years, Nvidia has dominated the AI GPU market thanks to CUDA, its proprietary development platform, which has become the de facto standard for training and running artificial intelligence models. Today we have reached the point where anyone who wants to do AI at scale needs Nvidia chips, and that gives the company enormous market power, since it supplies the hardware for an industry everyone wants to enter. Meta, despite having some of the best open models in the sector with Llama, keeps spending billions annually on Nvidia hardware. The strategic move. With Rivos, Meta is not just buying a company; it is buying an alternative to the current technology stack. The startup develops GPUs and accelerators based on RISC-V, an open-source architecture standard that threatens the traditional x86 (Intel and AMD) and ARM. Meta already works on its own in-house chip, the Meta Training and Inference Accelerator (MTIA), designed with Broadcom and manufactured by TSMC, but progress is not as fast as Zuckerberg would like. According to sources cited by Bloomberg, the CEO had been actively looking in the market for reinforcements to accelerate development. It is not the only one. Meta joins a race in which its technology rivals already have a head start. Google has its TPUs, Amazon has Trainium and Microsoft has developed Maia. The AI war is not won with the best models alone, but also with the chips that run them. And Meta, despite burning tens of billions of dollars on AI, was falling behind on this front. The context. The Rivos acquisition is not an isolated move.
Meta had already tried to buy FuriosaAI, a South Korean startup specialized in chips for training AI systems, but its $800 million offer was rejected. In addition, the company recently announced a $29 billion investment to build a huge data center in Louisiana and plans to spend up to $72 billion this year on AI-related infrastructure. The RISC-V challenge. Rivos represents an ambitious bet. Although RISC-V has not yet penetrated US data centers at scale (its presence is mostly limited to microcontrollers and IoT devices), its potential is undeniable: China is already launching tablets and laptops with this architecture. If Meta manages to develop a RISC-V-based AI accelerator capable of replacing the NVIDIA H200 in its internal operations, it would be a considerable blow to the dominant standard. Cover image | Nvidia and Meta. In Xataka | OpenAI has just presented Sora 2 with a TikTok-style app, heralding a new wave of viral videos

“Circular financing” between Nvidia and OpenAI could be the stroke of genius of the century… or a collapse

Nvidia has announced a “strategic investment” of up to $100 billion in OpenAI. But it is an investment with a catch: OpenAI will use that money to buy Nvidia chips. The semiconductor manufacturer thus becomes the financier of its own most important customer. Why it matters. This maneuver is dangerously reminiscent of the “circular financing” schemes that characterized the end of the 2000 dotcom bubble, when companies like Lucent, Nortel and Cisco financed operators such as Global Crossing so they could buy their equipment. We are not the first to draw this parallel at this stage of AI. When that bubble burst, both suppliers and customers sank into a spiral of debt and overcapacity. The agreement will allow OpenAI to build data centers with a combined capacity of 10 gigawatts, equivalent to about 10 nuclear reactors. Jensen Huang, CEO of Nvidia, has acknowledged that this represents between 4 and 5 million GPUs: “double what we shipped last year.” Brutal scale, in figures. The numbers are astronomical. According to Huang himself in August, building a 1-gigawatt data center costs between $50 and $60 billion, of which about $35 billion goes to Nvidia chips. By that logic, the projected 10 gigawatts would cost more than $500 billion. Markets have reacted with euphoria: Nvidia shares rose almost 4%, adding $170 billion to its market capitalization. Huang's company is already valued at $4.5 trillion. Yes, but. The parallel with the dotcom bubble is disturbing. We already saw these same “vendor financing” schemes in the final stage of the 2000 tech bubble, and they did not end well for any of the parties. The difference is that today's numbers are much larger, even adjusting for inflation. The key question is whether the productivity gains from generative AI will pay back the money spent. Behind the scenes.
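Huang's per-gigawatt figures can be sanity-checked with simple arithmetic. A minimal sketch, using only the numbers cited above (no independent data):

```python
# Back-of-the-envelope check of the figures cited in the article.
# All inputs come from Huang's public statements as quoted here.

GIGAWATTS = 10                      # planned OpenAI data-center capacity
COST_PER_GW = (50e9, 60e9)          # total build cost per gigawatt, USD (low, high)
NVIDIA_SHARE_PER_GW = 35e9          # portion per gigawatt going to Nvidia chips, USD

total_low = GIGAWATTS * COST_PER_GW[0]
total_high = GIGAWATTS * COST_PER_GW[1]
nvidia_total = GIGAWATTS * NVIDIA_SHARE_PER_GW

print(f"Total build-out: ${total_low / 1e9:,.0f}B - ${total_high / 1e9:,.0f}B")
print(f"Of which Nvidia chips: ${nvidia_total / 1e9:,.0f}B")
```

The low end of the range already lands on the "more than $500 billion" the article cites, with roughly $350 billion of that flowing to Nvidia.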
The agreement explains the current situation in the AI ecosystem: OpenAI desperately needs computing capacity to serve the 700 million weekly users of its products and maintain its competitive edge, but infrastructure costs are so high that it needs constant external financing. Nvidia, for its part, seeks to lock in future demand for its most advanced chips. The agreement guarantees massive orders while consolidating its dominant position against competitors such as AMD and Intel. “It is a closed loop: Nvidia gives OpenAI money, and OpenAI uses it to buy Nvidia products,” summarizes Javier Pastor. The threat. Antitrust experts are already raising their eyebrows. Andre Barlow, a lawyer specialized in competition law, told Reuters that “the agreement could change the economic incentives of NVIDIA and OpenAI, potentially locking in Nvidia's chip monopoly together with OpenAI's software leadership.” The structure creates extra barriers for competitors trying to scale their operations, whether AMD in chips for OpenAI or rivals in AI models. Things look ugly. In perspective. History is full of similar schemes that ended badly. Global Crossing, the telecommunications operator that went bankrupt in 2002, was financed precisely by the same suppliers that sold it equipment. When it emerged that real demand was far lower than projected, both Global Crossing and its financiers lost billions. The key question is whether demand for AI services will be sufficient to justify this colossal investment, or whether we are watching a rerun of the same speculative pattern with even more exorbitant figures. As Bernstein analyst Stacy Rasgon concludes: “On the one hand, it helps OpenAI meet very ambitious infrastructure objectives. On the other hand, it will further feed concerns about ‘circular’ financing.” Featured image | In Xataka | OpenAI estimates it will bring in $200 billion in 2030. The figure, like everything at OpenAI, is extremely ambitious
