Part 4/15:
However, scaling models is not without limits. Larger models demand more compute at inference time, and inference happens in real time in the vehicle. Tesla must balance model size against onboard hardware capabilities, aiming for a sweet spot: a model large enough to be superhuman yet small enough to run at the required inference rate of roughly 30 cycles per second. Walking this engineering tightrope requires careful co-tuning of hardware and software.
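To make the constraint concrete, here is a minimal sketch of the latency arithmetic implied by a ~30 Hz inference rate. The 5 ms overhead figure and the function name are illustrative assumptions, not Tesla specifications:

```python
# At ~30 inference cycles per second (from the text), each cycle has a
# budget of 1000 / 30 ≈ 33.3 ms for the model plus any fixed overhead.
TARGET_HZ = 30
FRAME_BUDGET_MS = 1000.0 / TARGET_HZ  # ~33.3 ms per cycle

def fits_budget(model_latency_ms: float, overhead_ms: float = 5.0) -> bool:
    """True if one forward pass plus fixed overhead fits a single cycle.

    The 5 ms overhead is an assumed placeholder for pre/post-processing.
    """
    return model_latency_ms + overhead_ms <= FRAME_BUDGET_MS

print(fits_budget(25.0))  # → True: a 25 ms model leaves headroom
print(fits_budget(35.0))  # → False: a 35 ms model blows the budget
```

This is why doubling model size is not free: if the forward pass grows past the per-cycle budget, the system can no longer keep up with the real-time control loop.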
From Imitation to Reinforcement Learning: Advancing Safety and Performance
Historically, Tesla's FSD relied heavily on imitation learning (IL), in which neural networks were trained on vast datasets of human driving behavior. This supervised learning approach teaches the system to mimic the best drivers, those with safe, fluid driving styles.
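The core of imitation learning is plain supervised regression: fit a policy so its predicted action matches the expert's logged action for the same state. The toy below is a minimal sketch of that idea with a one-weight linear policy and synthetic "expert" data; it is illustrative only and bears no relation to Tesla's actual training pipeline:

```python
import random

# Behavior-cloning sketch: recover an expert policy from logged
# (state, action) pairs by minimizing squared prediction error.
random.seed(0)

# Synthetic expert log: the expert always steers with action = 0.5 * state,
# so a correctly trained policy weight should converge to 0.5.
data = [(s, 0.5 * s) for s in [random.uniform(-1, 1) for _ in range(200)]]

w = 0.0    # single policy weight
lr = 0.1   # learning rate
for _ in range(100):                     # epochs over the expert log
    for state, expert_action in data:
        pred = w * state                 # policy's proposed action
        grad = 2 * (pred - expert_action) * state  # d(error^2)/dw
        w -= lr * grad                   # gradient step toward the expert

print(round(w, 3))  # → 0.5
```

Real systems replace the linear policy with a deep network and the scalar state with camera video, but the training signal is the same: match what the human did.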