RE: LeoThread 2025-11-05 23-35

in LeoFinance · 23 days ago

Part 9/15:

He suggests future hardware designs will be more specialized and vertically optimized, for example chips tailored to specific regions, languages, or autonomous-driving use cases. Mixture-of-experts models, which route each input to a small subset of specialized expert sub-networks, are a promising approach, enabling more efficient inference tuned to dynamic environments.
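To make the routing idea concrete, here is a minimal, self-contained sketch of mixture-of-experts inference in NumPy. All names (`W_router`, `experts`, `top_k`) and the use of plain linear layers as stand-ins for expert networks are illustrative assumptions, not anything from the interview; the point is only that compute scales with the number of experts selected, not the total number.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2  # hypothetical sizes

# Router: a linear layer that scores each expert for a given input token.
W_router = rng.standard_normal((d_model, n_experts))
# Experts: each a simple linear transform (stand-in for a full FFN block).
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x):
    """Route token x to its top-k experts; mix their outputs by gate weight."""
    scores = softmax(x @ W_router)          # (n_experts,) routing probabilities
    top = np.argsort(scores)[-top_k:]       # indices of the k highest-scoring experts
    gate = scores[top] / scores[top].sum()  # renormalized gate weights
    # Only the selected experts run, so per-token compute scales with top_k,
    # not with n_experts -- the efficiency win the paragraph describes.
    return sum(g * (x @ experts[i]) for g, i in zip(gate, top))

x = rng.standard_normal(d_model)
y = moe_forward(x)
print(y.shape)
```

A real deployment would use learned experts and batched routing, but the shape of the idea is the same: the router decides, per input, which specialists to activate.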


The Supply Chain and Hardware Scalability

Supply chain constraints, especially HBM availability and advanced packaging such as TSMC's wafer-on-wafer processes, pose significant hurdles. Cutress explains that AI training systems require terabytes of high-performance memory spread across their accelerators, which limits scalability.
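A rough back-of-envelope calculation shows why training memory runs into terabytes and why per-package HBM capacity becomes the bottleneck. The byte counts below assume a hypothetical mixed-precision Adam setup (bf16 weights and gradients, fp32 master weights and two fp32 optimizer moments) and a 141 GB HBM package; these numbers are illustrative assumptions, not figures from the interview.

```python
# Bytes of training state per parameter under the assumed recipe:
# bf16 weights (2) + bf16 grads (2) + fp32 master (4) + fp32 moment1 (4) + fp32 moment2 (4)
BYTES_PER_PARAM = 2 + 2 + 4 + 4 + 4  # = 16 bytes

def training_memory_tb(n_params_billion):
    """Terabytes of weight/gradient/optimizer state for a model of this size."""
    return n_params_billion * 1e9 * BYTES_PER_PARAM / 1e12

# Example: a 70B-parameter model (activations and KV caches excluded).
state_tb = training_memory_tb(70)
print(f"{state_tb:.2f} TB of training state")

# Versus one HBM package (assumed 141 GB, roughly an H200-class part):
hbm_per_gpu_tb = 141 / 1000
print(f"at least {state_tb / hbm_per_gpu_tb:.0f} GPUs just to hold the state")
```

Even before counting activations, the state alone exceeds any single package's HBM by a wide margin, which is why capacity and packaging supply constrain how these systems scale.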