Tesla will use Cortex Supercomputer for FSD v13

Tesla will deploy the Cortex supercomputer cluster for Full Self-Driving (FSD) v13, coming in November. The integration is expected to bring major improvements in computing performance for model training.

Since 2024, Tesla has been working on a Gigafactory site expansion to house a new AI data center. The new cluster will increase computing capacity for FSD, robotics, and other products.

The existing FSD training system started with 35,000 Nvidia H100 GPUs and later added 15,000 more to increase training capability, bringing the total to 50,000 GPUs currently in action.

While announcing the cluster name “Cortex,” Tesla CEO Elon Musk said the supercomputer will feature an additional 50,000 H100 and H200 chips, along with large storage capacity for model training.

These GPUs will be rolled out gradually into the computing architecture alongside new feature releases and product development.

Cortex is expected to boost computational power by 5 times compared to the existing training architecture. The increased computing power will allow Tesla to train larger and more complex models, resulting in improved performance and perception for FSD and humanoid robots.

Training is an important part of large language models, and Tesla has a vast amount of training data from its existing fleet flowing into its servers.

However, it is important to utilize that data as quickly as possible, and that is where Cortex could play a significant role. The improved computing capacity will also allow the driving models to be trained faster and fed more data for better self-driving.

FSD v13

This version is expected to be the biggest upgrade in FSD’s history, bringing improvements in self-driving, drive control, object detection, safety, and more.

The company will start limited testing of FSD v13 next week, and a wide rollout is expected around Thanksgiving.
