
RE: LeoThread 2025-11-05 23-35


Part 3/12:

In these full training cycles, Tesla leverages increased computational power, sometimes scaling to the equivalent of more than 30,000 Nvidia H100 GPUs, to produce markedly improved models. The transition from 12.3 to 12.4, for instance, likely corresponds to the first model trained on that expanded compute infrastructure, yielding significant gains in safety, perception, and decision-making. Subsequent incremental releases, such as 12.3.1 through 12.3.3, focus on fixing bugs and refining specific components, keeping the core architecture stable while improving performance.
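
To make the versioning convention concrete, here is a minimal, hypothetical Python sketch (not Tesla's actual release tooling; the function names and labels are illustrative) that classifies a version bump as a full training cycle versus an incremental point release, following the pattern described above:

```python
# Hypothetical sketch of the versioning convention described above:
# a minor-version bump (e.g. 12.3 -> 12.4) marks a full training cycle,
# while a point-release bump (e.g. 12.3.1 -> 12.3.3) marks incremental fixes.

def parse(version: str) -> tuple[int, ...]:
    """Split a dotted version string like '12.3.1' into integer components."""
    return tuple(int(part) for part in version.split("."))

def classify_bump(old: str, new: str) -> str:
    """Label the transition between two versions under the assumed convention."""
    o, n = parse(old), parse(new)
    if n[:2] != o[:2]:
        return "full training cycle (new model trained on expanded compute)"
    return "incremental release (bug fixes and component refinement)"

if __name__ == "__main__":
    print(classify_bump("12.3", "12.4"))      # full training cycle ...
    print(classify_bump("12.3.1", "12.3.3"))  # incremental release ...
```

Under this assumed scheme, only the first two version components signal a retrained model; anything beyond them is treated as a refinement of the same core architecture.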