RE: LeoThread 2025-10-20 16-44

in LeoFinance · 2 months ago

Part 3/12:

The scale of Colossus is unprecedented. xAI expanded the cluster from 100,000 to 200,000 GPUs within three months, with ambitions to exceed 1 million GPUs by 2026 or 2027. This level of investment underscores the company's belief that enormous computational power is essential to compete with larger, more established labs.

The Cost of Progress: Why Such Massive Compute Power?

The question many raise is whether deploying hundreds of thousands of GPUs is overkill for a chatbot or similar product. To understand why it is not, consider the training requirements of Grok 3, xAI's flagship model released in February 2025. Training such a model demands over 100 septillion floating-point operations (FLOPs), roughly 10^26, a staggering number.
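To get a feel for where a number like 10^26 comes from, here is a minimal back-of-the-envelope sketch using the widely cited "6 × parameters × tokens" approximation for dense transformer training. The parameter and token counts below are illustrative assumptions, not xAI's published figures.

```python
# Rough training-compute estimate via the common 6*N*D heuristic:
# total FLOPs ~ 6 * (model parameters) * (training tokens).
# All concrete values are hypothetical, chosen only to show the scale.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total FLOPs to train a dense transformer."""
    return 6 * n_params * n_tokens

# Hypothetical example: a 1-trillion-parameter model trained on
# ~17 trillion tokens lands near 1e26 FLOPs (100 septillion).
flops = training_flops(1e12, 1.7e13)
print(f"{flops:.2e} FLOPs")
```

Under these assumed inputs the estimate comes out around 1.0e26 FLOPs, which is the order of magnitude discussed above; real training runs vary with architecture and data volume.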