
RE: LeoThread 2025-10-20 16-44

in LeoFinance · 2 months ago

Part 4/12:

If a company had only 1,000 GPUs, the hardware cost would be manageable, around $25 million, but training Grok 3 would take more than 3 years, an unacceptably long timeline given the rapid pace of AI innovation. Scaling to 10,000 GPUs cuts training time to roughly one year, but at a hefty $250 million investment.

At 100,000 GPUs, training time drops to about 100 days, in line with xAI's public estimates. Pushing further to 1 million GPUs could bring training down to mere days or weeks, enabling rapid iteration and model updates. Scale at this level is not just a feather in xAI's cap but a fundamental enabler of faster development and deployment of next-generation AI models.
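The cost side of the figures above can be sketched in a few lines. This is only an illustration under the assumption implied by the text, roughly $25,000 of hardware per GPU; the training times are quoted from the text rather than derived, since real scaling is sublinear (interconnect and coordination overhead), not a simple divide-by-GPU-count.

```python
# Assumption implied by the figures above: ~$25,000 of hardware per GPU.
COST_PER_GPU_USD = 25_000

# Training-time estimates as quoted in the text (not derived here; real
# clusters scale sublinearly due to communication overhead).
QUOTED_TRAINING_TIME = {
    1_000: "3+ years",
    10_000: "~1 year",
    100_000: "~100 days",
    1_000_000: "days to weeks",
}

for n_gpus, duration in QUOTED_TRAINING_TIME.items():
    cost_musd = n_gpus * COST_PER_GPU_USD / 1e6  # hardware cost in $M
    print(f"{n_gpus:>9,} GPUs: ~${cost_musd:,.0f}M in hardware, {duration}")
```

Running this reproduces the $25M and $250M figures for the 1,000- and 10,000-GPU cases, and makes plain why only the largest players can contemplate the 100,000-plus tier.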

The Economics and Strategic Implications