Meta has announced a multigenerational agreement with Nvidia to power its next-generation data centres, marking a significant expansion of the Facebook parent's dependence on the AI chip giant.
According to Business Insider, the deal will see Meta deploy millions of Nvidia Graphics Processing Units (GPUs) and next-generation Central Processing Units (CPUs) for AI training and inference, as well as networking equipment and confidential computing technology to enhance AI features in WhatsApp.
The collaboration extends beyond the conventional use of GPUs alone. By pairing Nvidia's CPUs with its GPUs in the same systems, Meta is standardising its AI infrastructure on a single supplier.
CPUs, historically the domain of Intel and AMD, handle general-purpose computing, while GPUs are dedicated to the parallel workloads that dominate AI. Consolidating both with Nvidia simplifies infrastructure management and may reduce Meta's reliance on other suppliers such as Intel, AMD, and Google's Tensor Processing Units (TPUs).
Meta’s adoption of Nvidia’s next-generation Vera CPUs, building on the existing Grace CPU platform, strengthens Nvidia’s role across both hardware and AI operations. Analysts say this move could erode Intel and AMD’s market share in data centres, as cost efficiency and power performance become critical for AI inference workloads.
Analyst Patrick Moorhead said the deal may slow Meta's talks with Google on TPUs but reinforces Nvidia's expanding influence over AI infrastructure. Another analyst, Rob Enderle, noted that CPUs are cheaper and more energy-efficient than GPUs for inference workloads, and that a unified vendor model simplifies management for CIOs.