Meta faces a crucial year. While its competitors were laying the foundations for AI, Meta was burning money on the metaverse. That, along with an approach to AI very different from what Google or OpenAI were pursuing, left Zuckerberg’s company lagging for a few years. After putting its house in order and signing an AI A-team, Meta was preparing both a major model and its own chips for training.
Things… haven’t turned out as expected.
MTIA. Among Meta’s various teams focused on artificial intelligence, there is one known as MTIA, short for ‘Meta Training and Inference Accelerator’. Its mission was to research and design Meta’s own chips for AI training. Having your own chip makes perfect sense, since it is designed around your specific needs.
There is another advantage: you do not depend on anyone else. If NVIDIA runs short of chips, it doesn’t matter, because you have your own and can keep scaling your data center systems (and Meta’s are immense) to continue training and inference. Meta was not going to handle manufacturing itself; that would fall to the highly reputable TSMC. But the program got off to a bad start.
This is very difficult. Reuters already reported it last year: after testing its first in-house training chip, Meta realized things were not going well. The chip underperformed expectations and also lagged behind the competition. Meta did not scrap the chips; it redirected them to other systems (such as the recommendation algorithms behind Facebook and Instagram). The problem is that the performance of the training chip, the one that really matters in the AI race, was not enough.
Strategy change. The Information reports a statement from Meta saying the company remains committed “to investing in different silicon options to meet our needs, which includes the advancement of our MTIA division” and urging us to stay tuned for news to be shared throughout this year. However, the same outlet notes that Meta has greatly lowered its expectations for its chips.
The idea was to have two chips. On one hand, Iris, a single-instruction training chip that is easy to design but hard to fully exploit in AI training tasks. On the other, Olympus, a chip that was due to be completed toward the end of this year and that would be the centerpiece of Meta’s training clusters. According to The Information, there were serious internal doubts about Olympus’s stability, its intricate design and its profitability, so Meta has shelved it to focus on simpler chips.
The evidence. In the end, if you can’t beat your “enemy”, join him. The sources consulted by The Information point out that, among other complications, the training software was not as stable as alternatives such as NVIDIA’s. All of this has led to two multimillion-dollar agreements.
In the span of just a few days, Meta signed agreements with both AMD and NVIDIA to supply chips for training its AI. It’s a win for everyone: Meta gets what it needs, NVIDIA adds another client to a list it already dominates, and AMD continues to make a name for itself in the sector thanks to deals like this one or the one it signed last year with OpenAI. Meta also secures multiple sources so it does not depend on a single company. In fact, it is also estimated that Meta has signed an agreement to rent TPU capacity from Google.
The competition. Meta’s objective, therefore, is to diversify its portfolio of AI chip suppliers as much as possible while continuing to research its own chips, about which we will supposedly learn details later. Meta may keep working on Olympus or a variant of it, or decide on another approach.
Because what is clear is that Meta must develop something of its own. NVIDIA and AMD are suppliers, not competitors as such. The real competition is OpenAI, xAI and Google, and the last two have their factories at full capacity: Google with its TPUs, processors designed exclusively for AI, and xAI with its own chips, a project it abandoned and picked up again more recently.
Objective: dethrone NVIDIA. All of this happens in a world where everyone is a ‘friend’ and an enemy at the same time. As I said, NVIDIA is a hardware supplier, but it practically controls the AI computing market and is moving in both hardware and software. It is logical that other companies are exploring alternatives to power their own AI.
Add to the list Amazon, which is also manufacturing its own chips under the Trainium3 UltraServer name, and OpenAI with its agreement with Broadcom to manufacture chips. It is, as I say, a curious scenario: everyone needs everyone else, and there is the “circular economy” of AI, but at the same time everyone wants to be independent.
The problem is that NVIDIA has a huge head start here: it has the technology, the contracts with memory manufacturers… and the contacts with the company that ends up manufacturing the best chips: TSMC.


