Part 6/9:
The timing of this announcement is particularly noteworthy. Barely a month after leaving OpenAI in May 2024, Sutskever launched a company whose sole purpose is to develop superintelligent AI with safety as its guiding principle. This contrasts sharply with OpenAI's approach, which pledged a significant but still partial share of its compute resources (around 20%) to alignment research. OpenAI has estimated that superintelligence might emerge around 2027; Sutskever's initiative specifies no such timeline, perhaps signaling a more aggressive or optimistic outlook.
OpenAI's leadership under Sam Altman remains committed to building safe AGI, but Sutskever's move represents a more explicit and singular focus on superintelligence and safety.