Nvidia’s new specialized chip aims to accelerate AI processing speeds

The processor targets inference computing, a form of processing that allows AI models to respond to queries with greater speed and efficiency

By The News Digital | February 28, 2026

Nvidia has historically dominated the training phase of AI and is now set to launch a new processor designed to help OpenAI and other customers build faster, more efficient AI systems. The announcement marks a significant shift: a dedicated processor built specifically for inference computing, allowing AI models to respond to queries with greater speed and efficiency.

Earlier this month, Reuters reported that OpenAI is dissatisfied with the speed at which Nvidia’s hardware generates answers for ChatGPT users, particularly for complex tasks like software development and integrating AI within other software. According to a source who spoke to Reuters, OpenAI’s goal is to acquire new hardware that will eventually handle about 10% of its inference computing requirements.


Moreover, OpenAI has been in talks with startups such as Cerebras and Groq to supply chips for faster inference. In a strategic countermove, Nvidia closed a $20 billion deal with Groq, effectively ending OpenAI’s negotiations with the startup.

While Nvidia previously committed up to $100 billion to OpenAI, that arrangement has recently been restructured into a $30 billion investment. This deal provides OpenAI with the essential capital for advanced hardware while securing Nvidia’s position as a primary stakeholder.

Ultimately, these advancements pave the way for increased chip production, fostering long-term growth across the AI sector.
