Elon Musk's xAI Plans to Build Supercomputer for AI Chatbot Grok with Oracle Partnership

Elon Musk plans to build a supercomputer for xAI's Grok chatbot with Nvidia's H100 GPUs, aiming for completion by fall 2025.

In a recent presentation to investors, U.S. entrepreneur Elon Musk revealed that his artificial intelligence startup xAI is embarking on the development of a supercomputer to power the next iteration of its AI chatbot Grok, according to a report by The Information.

Musk said he aims to have the supercomputer operational by the fall of 2025, and he raised the possibility of partnering with Oracle to build the massive computing system.

Size and Dominance in the Market

The proposed supercomputer would link clusters of Nvidia's flagship H100 graphics processing units (GPUs) into a system at least four times larger than today's biggest GPU clusters, according to Musk's May presentation to investors, The Information reported.

Nvidia's H100 family of GPUs dominates the data center chip market for AI, but soaring demand makes these powerful processors difficult to acquire.

Background and Future Requirements

Elon Musk established xAI as a direct competitor to Microsoft-backed OpenAI and Alphabet's Google, with the intent of driving further innovation in the AI sector. Notably, Musk was also a co-founder of OpenAI.

Earlier this year, Musk disclosed that training the Grok 2 model necessitated approximately 20,000 Nvidia H100 GPUs. Looking ahead, the development of the Grok 3 model and subsequent iterations will demand a staggering 100,000 Nvidia H100 chips.


Copyright ©2025 All rights reserved | PrimeAi News