Part 4/16:
Doomers often argue that rapid, competitive AI development increases risk. They posit a "terminal race condition"—where companies and nations accelerate AI development in a bid to outpace rivals—that can lead to catastrophic outcomes. Yet the speaker contends that the empirical record of the last two years undercuts this assumption: since the launch of models like ChatGPT and initiatives like Max Tegmark's AI pause letter, faster AI development has not produced any observed increase in safety incidents or harms.