The comments come after DeepSeek last week released R1, an open-source reasoning model that reportedly outperformed the best models from U.S. companies such as OpenAI. R1's self-reported training cost was less than $6 million, a fraction of the billions that Silicon Valley companies are spending to build their artificial intelligence models.
Nvidia's statement indicates that it sees DeepSeek's breakthrough as creating more work for the American chipmaker's graphics processing units, or GPUs.
"Inference requires significant numbers of NVIDIA GPUs and high-performance networking," the spokesperson added. "We now have three scaling laws: pre-training and post-training, which continue, and new test-time scaling."