
xAI raising $6 billion to buy 100,000 Nvidia chips to train next Grok version


Generative AI firm xAI is raising up to $6 billion to add 100,000 Nvidia AI chipsets to its data center, CNBC reports.

The company is expected to close the funding round by early next week. The round consists of $5 billion from sovereign funds in the Middle East and $1 billion from other investors.


The new funding will be used to buy 100,000 Nvidia chipsets for xAI's Memphis supercomputer cluster. The report also mentions that the data center will share computing capacity with Tesla's Full Self-Driving (FSD) system.

Last November, xAI announced Grok, its large language model (LLM), and integrated it into the social media site X. The chatbot is available to paid users, and a free version is planned for launch soon.


In July, xAI announced the completion of a supercomputer cluster named Colossus. It features 100,000 Nvidia H100 chipsets and is located in Memphis, Tennessee. The cluster uses Nvidia Spectrum-X Ethernet networking and a Supermicro liquid-cooling system.

The new supercomputer was built by xAI and Nvidia in 122 days, a project that would typically take months or even years to complete. Interestingly, it took xAI only 19 days to roll all the hardware onto the data-center floor and start training.


In August, xAI launched the Grok 2 LLM family, which can generate images from text descriptions. The company also improved the model's response speed to provide a faster user experience.

Furthermore, the new version can interpret images and screenshots. You can upload an image and ask for an explanation, or have it convert a code diagram into runnable code in real time.
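To illustrate what such an image-explanation request might look like in practice, here is a minimal sketch assuming xAI exposes an OpenAI-compatible chat completions endpoint; the base URL, model name, and image URL below are illustrative assumptions, not details confirmed by the article.

```python
from openai import OpenAI

# Assumption: xAI's API is OpenAI-compatible; endpoint and model name are illustrative.
client = OpenAI(
    base_url="https://api.x.ai/v1",
    api_key="YOUR_XAI_API_KEY",
)

# Send an image (e.g. a screenshot or code diagram) together with a text prompt.
response = client.chat.completions.create(
    model="grok-2-vision",  # hypothetical vision-capable model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Explain what this diagram shows and convert it into Python code."},
                {"type": "image_url", "image_url": {"url": "https://example.com/diagram.png"}},
            ],
        }
    ],
)

# The model's explanation (or generated code) comes back as ordinary chat text.
print(response.choices[0].message.content)
```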


Grok 3

The additional 100,000 Nvidia chipsets will be used to train the next Grok version, Grok 3, which is expected to roll out by the end of this year or in early 2025.

The added compute is expected to enhance the overall capacity and performance of xAI's upcoming models and services.
