Part 2/10:
One of the most striking concepts Aschenbrenner emphasizes is the rapid, exponential growth of AI capabilities, often described in terms of "orders of magnitude." He discusses what he terms the "trillion-dollar cluster," a supercomputing infrastructure requiring immense power and capital to train frontier AI models.
According to Aschenbrenner, the scaling of AI clusters will accelerate dramatically over the next decade.
By 2024, such clusters already demand around 100 MW of power and roughly 100,000 high-performance GPUs, costing billions of dollars.
By 2026, this will expand to a gigawatt-scale cluster, drawing power comparable to the Hoover Dam or a large nuclear reactor, costing tens of billions of dollars and using around a million GPUs.
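The scaling pattern described above amounts to roughly one order of magnitude in power and GPU count every two years. A minimal back-of-envelope sketch, using only the 2024 and 2026 figures from the text (the "10x per two-year step" rule and the helper name `cluster_at` are illustrative assumptions, not Aschenbrenner's formula):

```python
def cluster_at(year, base_year=2024, base_mw=100, base_gpus=100_000):
    """Scale power (MW) and GPU count by 10x per two-year step.

    Illustrative assumption: one order of magnitude every two years,
    anchored to the ~100 MW / ~100,000 GPU figures cited for 2024.
    """
    steps = (year - base_year) // 2
    factor = 10 ** steps
    return base_mw * factor, base_gpus * factor

for year in (2024, 2026):
    mw, gpus = cluster_at(year)
    print(f"{year}: ~{mw:,} MW, ~{gpus:,} GPUs")
```

Stepping from 2024 to 2026 under this assumption reproduces the gigawatt-scale (1,000 MW), million-GPU cluster the text describes.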