
RE: LeoThread 2025-11-05 23-35

in LeoFinance 23 days ago

Part 4/12:

This meticulous approach means that each major release (12.4, 12.5, 12.6) is essentially a fresh start: the model is retrained on a clean, curated dataset with the maximum compute available at the time, rather than patched on top of the previous version. Each retraining cycle therefore benefits from better data, collected continuously by the vehicle fleet, and from more compute, yielding a progressively more refined AI system with each iteration.
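The fresh-start cycle described above can be sketched as a toy loop. Everything here is illustrative: the `train` function, the per-release dataset sizes, and the compute budgets are invented stand-ins to show the shape of the idea (retrain from zero each release with more data and more compute), not Tesla's actual pipeline or numbers.

```python
# Hypothetical sketch of a "fresh start per release" training cycle.
# All names and numbers are illustrative assumptions, not Tesla's real figures.

def train(initial_weights, dataset_size, compute_hours):
    """Toy stand-in for a training run; returns a made-up 'quality' score.

    Quality is capped by whichever of data or compute is scarcer,
    mimicking the idea that both must grow together.
    """
    return initial_weights + min(dataset_size, compute_hours)

# Each major release restarts from zero weights with a bigger curated
# dataset and a bigger compute budget than the last.
releases = {
    "12.4": {"dataset_size": 100, "compute_hours": 50},
    "12.5": {"dataset_size": 200, "compute_hours": 120},
    "12.6": {"dataset_size": 400, "compute_hours": 300},
}

for version, cfg in releases.items():
    quality = train(initial_weights=0,
                    dataset_size=cfg["dataset_size"],
                    compute_hours=cfg["compute_hours"])
    print(f"FSD {version}: quality={quality}")
```

The key design point the sketch captures is that `initial_weights=0` every cycle: quality improvements come from the improved dataset and larger compute budget, not from carrying forward the previous model.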


Scaling Compute for Smarter Models

Also noteworthy is Tesla's capacity to keep expanding the compute dedicated to training. As detailed in shareholder presentations, the jump from a handful of H100 GPUs to over 15,000, and reportedly on toward 30,000, has been pivotal. This scaling not only shortens training times but also improves the model's ability to handle complex, rare edge cases.
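To see why GPU count matters so much for turnaround time, a back-of-the-envelope calculation helps. The total training budget (`TRAIN_FLOPS`), the per-GPU throughput, and the 90% scaling efficiency below are all rough illustrative assumptions, not figures from Tesla's presentations; an H100 delivers on the order of a petaflop per second at low precision, which is the only real anchor used here.

```python
# Back-of-the-envelope: how wall-clock training time falls as GPUs scale.
# TRAIN_FLOPS and the efficiency factor are illustrative assumptions.

H100_FLOPS = 1e15      # ~1 PFLOP/s per H100 at low precision (order of magnitude)
TRAIN_FLOPS = 1e24     # hypothetical total compute budget for one training run

def training_days(num_gpus, efficiency=0.9):
    """Ideal wall-clock days for the budget, discounted for scaling overhead."""
    effective_flops = num_gpus * H100_FLOPS * efficiency
    return TRAIN_FLOPS / effective_flops / 86_400  # 86,400 seconds per day

for n in (100, 15_000, 30_000):
    print(f"{n:>6} GPUs -> {training_days(n):8.2f} days")
```

Under these toy numbers, going from 100 to 15,000 GPUs cuts wall-clock time by a factor of 150, which is what turns a months-long retraining cycle into one that can run per release.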