xAI Grok 2 undergoes fine-tuning ahead of its upcoming release

AI company xAI is fine-tuning its Grok 2 large language model (LLM) and fixing bugs ahead of an upcoming release. Founder Elon Musk said the model could launch next month, a timeline that matches an earlier report on the model's release.

Grok is a generative AI chatbot integrated into the X social media platform for text conversations. It was first released in November last year and received its first major upgrade in March this year with the v1.5 update, which improved reasoning capabilities and extended the context length to 128,000 tokens.

The 1.5 vision model adds the ability to process different types of documents, including screenshots, images, and diagrams. Meanwhile, Grok 2 is in training and is expected to push performance further; to that end, xAI is training the model on 24,000 Nvidia H100 GPUs.

The H100 is Nvidia's flagship data-center GPU and is widely used in the generative AI industry. Up to 256 H100s can be connected through the NVLink Switch System to accelerate exascale workloads.
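
For a sense of how work is spread across many H100s, the sketch below shows a generic PyTorch distributed-data-parallel training loop launched with torchrun; the NCCL backend then moves gradients between GPUs over NVLink/NVSwitch where available. This is a minimal illustration with a placeholder model and hyperparameters, not a description of xAI's actual training setup.

```python
# Generic multi-GPU data-parallel sketch (placeholder model, not xAI's code).
# Launch with, e.g.:  torchrun --nproc_per_node=8 train.py
# The NCCL backend carries GPU-to-GPU traffic over NVLink/NVSwitch when present.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")      # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; a real LLM would also be sharded across nodes.
    model = torch.nn.Linear(1024, 1024).cuda()
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(32, 1024, device="cuda")
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()                          # gradients all-reduced via NCCL
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```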

Inside, the chip features fourth-generation Tensor Cores and a Transformer Engine with FP8 precision, delivering up to 4x faster training than the previous generation on GPT-3-class models. Fourth-generation NVLink provides 900 gigabytes per second of GPU-to-GPU interconnect bandwidth.
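
For readers curious what FP8 training looks like in code, the sketch below uses NVIDIA's open-source Transformer Engine library for PyTorch, which provides drop-in layers whose matrix multiplications run in FP8 on H100-class GPUs. The layer sizes and recipe settings here are arbitrary examples; this is not xAI's training code.

```python
# Minimal FP8 training sketch with NVIDIA Transformer Engine (illustrative only).
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# te.Linear is a drop-in replacement for torch.nn.Linear whose GEMMs
# can execute in FP8 on Hopper (H100) Tensor Cores.
model = te.Linear(4096, 4096, bias=True).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# HYBRID format uses E4M3 in the forward pass and E5M2 for gradients;
# DelayedScaling picks FP8 scaling factors from a running amax history.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

inp = torch.randn(16, 4096, device="cuda")

# Layers inside fp8_autocast run in FP8; optimizer state and the loss
# remain in higher precision.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)

loss = out.float().pow(2).mean()
loss.backward()
optimizer.step()
```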

Beyond this, xAI plans to scale the cluster to 100,000 H100 GPUs this year. The expanded cluster will be used to train the next generation of Grok for even better performance and is believed to be the largest among generative AI companies.

The company has also called off a planned server expansion with Oracle and decided to build the required infrastructure itself to achieve maximum efficiency and faster data processing. Compared with Grok 1 and 1.5, the 2.0 release is expected to bring significant upgrades, though specific changes have yet to be revealed.
