Meta has not thrown in the towel on its MTIA (Meta Training and Inference Accelerator) chips. And although things have not all gone its way, ending its dependence on NVIDIA is too tempting a prize to give up. For that very reason, the company has presented a roadmap of four new chips with which it intends to accelerate both its content recommendation systems and its generative AI capabilities. The first chip is already operational; the other three will arrive before the end of 2027. Below are all the details.
Dependence. For years, Meta has relied almost entirely on NVIDIA and AMD to power its data centers. Developing its own silicon is complicated, but if it succeeds, it could prove a very shrewd financial and strategic bet in times like these.
According to its vice president of engineering, Yee Jiun Song, designing its own chips allows the company to “eliminate what we don’t need,” which translates directly into cost reduction. Added to this is greater independence from price swings and supply restrictions.
Which is exactly what it has announced. The four new chips are the MTIA 300, 400, 450 and 500, each with a different use:
- The MTIA 300 is already in production and is intended to train the algorithms that decide what content Facebook and Instagram users see.
- The MTIA 400 (known internally as Iris) has completed laboratory testing and is en route to data centers. In its official statement, Meta claims it offers performance “competitive with leading commercial products.”
- The MTIA 450 (Arke) will double the high-bandwidth memory compared to the 400 and is scheduled for early 2027.
- The MTIA 500 (Astrid), the most advanced, will arrive in mid-2027 and will incorporate, according to the company, improvements in low-precision data processing.
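The “low-precision data processing” mentioned for the MTIA 500 typically refers to running inference in narrow numeric formats such as int8 or FP8 instead of float32, trading a small amount of accuracy for large gains in throughput and memory bandwidth. Meta has not disclosed its scheme; as an illustrative sketch only (the function names and the symmetric per-tensor quantization approach here are assumptions, not Meta’s method), int8 quantization of a weight tensor looks like this:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map float32 values
    into [-127, 127] using a single scale factor."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from int8 values."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.03, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# The rounding error per element is at most half the quantization step (s / 2).
```

Hardware with dedicated low-precision units can execute the int8 arithmetic natively, which is where the performance improvement comes from.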
The chips are manufactured by TSMC, the world’s largest semiconductor producer, and have been developed in collaboration with Broadcom on the open RISC-V architecture.
The pace is the most striking thing. What’s unusual is not just that Meta makes its own chips, but the speed at which it plans to do so. The usual cycle in the industry is one or two years between generations; Meta aims to release new versions every six months. “The pace of AI evolution is so fast that we always want to have the most advanced chip available when we need it,” said Song. This accelerated cadence is possible, according to the company, thanks to a modular design that allows components to be reused between generations.
And this does not replace NVIDIA. It is important not to lose sight of the context. Meta remains one of the largest buyers of GPUs on the market. Just a few weeks ago it signed multi-million-dollar agreements with NVIDIA and AMD to supply chips for the next few years, and it has also reached an agreement to rent computing capacity on Google chips, as Wired reports.
MTIA chips are designed for specific internal tasks (inference and recommendation systems), not for training large language models, so this strategy complements its chip deals with NVIDIA and AMD.
Nor should we forget that Meta recently had to abandon its most ambitious training chip, known internally as Olympus, after the project ran into trouble in the design phase, as reported by The Information. Susan Li, Meta’s CFO, confirmed at a Morgan Stanley event that the company still aims to develop processors capable of training models, but gave no further details.
And now what. The real test of this bet will come when the chips are deployed at scale. The challenge at the moment is guaranteeing HBM memory supply amid a memory crisis that is affecting the entire technology sector. Song himself acknowledged to CNBC that the company “is absolutely concerned” about it, though he said supply is secured for its current plans. In the long term, we will see whether Meta can achieve something similar to what Google did with its TPUs.
Cover image | Mariia Shalabaieva and Meta