NVIDIA has spent the last couple of years as the mortar of the artificial intelligence industry. There is a picture that explains it better than a thousand words:


Its H200 chips feed the data centers used to train artificial intelligence and are objects of desire even for some of China's Big Tech, but NVIDIA is already preparing a new generation called Rubin. And if anyone is convinced that these future chips should be the bricks of his new data centers, it is Mark Zuckerberg.
The reason? They are necessary to achieve “personal superintelligence”. And that belief is what has inspired a multi-billion-dollar agreement. Yes, another one.
NVIDIA’s future looks bright to Meta
There is no specific figure, but The Wall Street Journal speaks of an agreement valued at “tens of billions of dollars”. Meta is chasing a kind of artificial intelligence focused on everyday use, beyond a chatbot. They believe in it so much that they have assembled the A-Team of AI and, to stop dealing in promises and start shipping products, they are going all in on future NVIDIA technology.
Jensen Huang’s company has GPUs like the H200 and the Blackwell architecture, but it is already finalizing the development of something else:
- Its new Rubin architecture
- And the Grace CPU.
Grace is especially interesting because it marks the first large-scale deployment of NVIDIA CPUs based on the ARM architecture. But it is not just the GPU and the CPU: NVIDIA is going to provide its entire ecosystem of hardware and software to Meta. “The complete NVIDIA platform”, as Huang has called it.
And there is something curious about this whole thing that perfectly exemplifies what is happening in the artificial intelligence arms race: companies are buying hardware that doesn’t exist to power data centers that only exist on paper.
NVIDIA is not yet mass-producing its Rubin GPUs because it depends on Samsung supplying HBM4 memory, which is only now starting to be mass-produced. One of the leaders of SMIC, China's great semiconductor hope, described the process as “creating huge roads when there are no cars running on them yet.”
He also noted that “no one has really thought about what exactly those data centers will do, but companies would love to build the entire capacity of the next 10 years in just one or two years.”
As we said, there are no specific figures for this agreement, but Meta has opened its wallet. In 2025 it invested $72 billion in AI, and the forecast for 2026 was $115 billion. We say “was” because the plan has been revised upward to $135 billion to expand data centers and try to meet Superintelligence Labs' goals.
At Xataka we always try to provide context when we talk about figures like these, but this one is so outsized that it is hard to contextualize. Well, here is one way: Meta’s $135 billion, on its own, is less than the $650 billion that Amazon, Google and Microsoft will spend this year combined. There’s the context.
Images | NVIDIA, Mark Zuckerberg
In Xataka | Western Digital has sold all its hard drive capacity by 2026: AI is devouring physical storage

