Part 2/13:
Max Winga from Control AI emphasizes that the dangers associated with AI are not merely speculative but are taken seriously by leading experts. A pivotal 2023 statement from the Center for AI Safety, signed by top CEOs—including Sam Altman of OpenAI—and renowned AI scientists such as Geoffrey Hinton, Yoshua Bengio, and Ilya Sutskever, explicitly states that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." This recognition stands in stark contrast to earlier dismissals and underscores the urgency of the issue.