RE: LeoThread 2025-10-05 18:20

Part 6/14:

He explained that training models is typically seen as the resource-intensive step, but inference (the real-time application of trained models) has become the bottleneck. "Most AI progress is limited not by data or algorithms but by the inference compute," he said. Groq's LPUs aim to resolve this by focusing on inference efficiency, enabling more widespread and cost-effective AI deployment.
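
A rough sketch of why inference can come to dominate: by the standard rules of thumb, training a dense transformer costs roughly 6·N·D FLOPs (N = parameters, D = training tokens), while generating one token at inference costs roughly 2·N FLOPs. At scale, the one-time training bill is quickly overtaken by the ongoing inference bill. The model size, training-token count, and serving volume below are purely hypothetical illustrations, not figures from the talk:

```python
# Back-of-the-envelope comparison of training vs. lifetime inference compute.
# Approximations: training ~ 6*N*D FLOPs, inference ~ 2*N FLOPs per token.
# All concrete numbers here are hypothetical, chosen only for illustration.

N = 70e9               # model parameters (hypothetical 70B model)
D = 2e12               # training tokens (hypothetical)
tokens_per_day = 50e9  # tokens served per day across all users (hypothetical)

training_flops = 6 * N * D
inference_flops_per_day = 2 * N * tokens_per_day

# Days of serving until cumulative inference compute exceeds training compute.
breakeven_days = training_flops / inference_flops_per_day

print(f"Training compute:          {training_flops:.2e} FLOPs")
print(f"Inference compute per day: {inference_flops_per_day:.2e} FLOPs")
print(f"Inference overtakes training after ~{breakeven_days:.0f} days of serving")
```

With these illustrative numbers, serving overtakes the entire training run in about four months, which is the sense in which inference compute, not training, becomes the limiting cost.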

He metaphorically described AI compute as more like a "rubberneck" than a bottleneck: the neck stretches as demand grows, so increasing compute capacity can significantly amplify the overall economy and innovation without hitting the traditional energy or manufacturing constraints of the past.


The Strategic Race: Competing at the Top of the Tech Heap