RE: LeoThread 2025-04-10 05:05

in LeoFinance · 8 months ago

More price-performant chips will help. But inference will also get meaningfully more efficient in the next couple of years through improvements in model distillation, prompt caching, computing infrastructure, and model architectures. Reducing the cost per unit of AI will unleash AI being used as expansively as customers desire, and also lead to more overall AI spending. It's like what happened with AWS. Dramatically lowering the cost of compute and storage led to a lower cost per unit, more invention, better customer experiences, and more absolute infrastructure spend.
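To make the prompt-caching point concrete, here is a minimal, purely illustrative sketch of the idea: repeated identical prompts are served from a cache instead of re-running inference. Real prompt caching in production systems is more sophisticated (it typically caches attention state over shared prompt *prefixes*, server-side), so `run_model` here is a hypothetical stand-in, not any real API.

```python
from functools import lru_cache

call_count = 0  # tracks how many "expensive" inference calls actually run

@lru_cache(maxsize=None)
def run_model(prompt: str) -> str:
    # Stand-in for an expensive model inference call. With caching,
    # a repeated prompt never reaches this body a second time.
    global call_count
    call_count += 1
    return f"response to: {prompt}"

run_model("What is AWS?")
run_model("What is AWS?")  # identical prompt: served from cache
```

Even this toy version shows the economics: the second call costs essentially nothing, which is exactly the per-unit cost reduction the comment is describing.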