
Meta Unveils Four New Chips to Power Its AI and Recommendation Systems
Meta has unveiled four new chips it designed to handle tasks like training and running AI models and serving recommendations across its social media platforms and other services.
The new chips are part of the Meta Training and Inference Accelerator (MTIA) family and are designed for use in data centers. Meta has been designing its own silicon for a few years now, largely as a way to cut the cost of powering its AI and recommendation systems. The company says it needs custom chips to keep up with demand for AI-driven services.
Google, Amazon and Microsoft have also been designing their own AI chips, both to reduce their reliance on components from other companies and to optimize their data centers for machine learning. A recent article about the global shortage of AI chips underscores the point, explaining that “tech companies are in a frantic rush for computing power to keep up with the increasing demands of artificial intelligence models.” The upshot is that whoever has the best AI infrastructure may wind up owning the future of AI.
What the chips do
The MTIA chips are built to handle two primary workloads. Training is the computationally intensive process of fitting an AI model to a dataset. Inference is the process of using a trained model to make predictions in real time. Meta’s custom chips are optimized for inference, which isn’t surprising given that the company’s core products revolve around recommendation algorithms.
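To make the distinction concrete, here is a minimal sketch written in a PyTorch style; the toy model, fake data and hyperparameters are purely illustrative and have nothing to do with Meta’s actual systems.

```python
# Minimal sketch: the same model, used two ways.
# Assumes PyTorch; the tiny model and random data are purely illustrative.
import torch
import torch.nn as nn

model = nn.Linear(16, 1)                      # toy "ranking" model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# --- Training: forward pass, loss, backward pass, weight update ---
features = torch.randn(32, 16)                # a batch of training examples
labels = torch.randn(32, 1)                   # e.g. observed engagement
optimizer.zero_grad()
loss = loss_fn(model(features), labels)
loss.backward()                               # computing gradients is the expensive part
optimizer.step()

# --- Inference: forward pass only, no gradients, run per request ---
model.eval()
with torch.no_grad():
    score = model(torch.randn(1, 16))         # predict for one user/item pair
print(score.item())
```

Training happens occasionally and in bulk; inference happens every time a user opens an app, which is why a company built on recommendations cares so much about making that second path cheap.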
Every time you like or comment on a post or scroll past a video, an AI model is making predictions about what you might want to see next. Analysts often say that recommendations are among the most intensive AI use cases in the world. For a look at how they operate across social media platforms, check out this recent story about AI recommendation algorithms. Optimizing those workloads can be the difference between a fast app and a slow one.
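As a rough illustration of what that prediction step looks like, here is a toy “score and rank the candidates” function in plain Python; the candidate posts, embeddings and scoring rule are hypothetical, not how Meta’s ranking systems actually work.

```python
# Illustrative only: a toy version of the "score and rank candidates" step
# behind a recommendation feed. All names and numbers here are made up.
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    embedding: list[float]          # item representation from an upstream model

def dot(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def rank_feed(user_embedding: list[float], candidates: list[Candidate]) -> list[str]:
    # Inference step: one cheap prediction per candidate, repeated billions of
    # times a day across all users -- which is why it pays to run it on
    # hardware optimized for exactly this pattern.
    scored = [(dot(user_embedding, c.embedding), c.post_id) for c in candidates]
    scored.sort(reverse=True)
    return [post_id for _, post_id in scored]

feed = rank_feed(
    user_embedding=[0.2, 0.9, -0.1],
    candidates=[
        Candidate("video_123", [0.1, 0.8, 0.0]),
        Candidate("photo_456", [-0.5, 0.2, 0.3]),
    ],
)
print(feed)   # post IDs ordered by predicted interest
```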
Why it matters
In a way, though, the details of the chips are secondary to a more important trend: AI isn’t just about software anymore; it is also about computing power. To build leading-edge AI models, you need custom-built chips, massive amounts of energy and enormous data centers. Companies that can get a handle on that infrastructure gain a major advantage over everyone else.
Meta’s foray into custom chips is a sign that the next phase of the AI wars may be waged not just in AI research but in semiconductor design. Some analysts think that if companies can develop their own optimized hardware stacks, they’ll be able to significantly cut their costs and speed up the deployment of AI across a wide range of applications, from recommendations to voice assistants to the immersive digital worlds of the metaverse.
Right now, Meta’s announcement of four new chips might seem like a minor detail in the epic story of AI. But ask the people who work on this stuff, and they’ll tell you something different: sometimes the key to unlocking AI isn’t in the algorithms; it is etched into the silicon itself.





