RE: LeoThread 2025-12-14 17-07

in LeoFinance · 3 days ago

Part 5/13:

Sutskever's comments, made just months before his departure from OpenAI, underscore an alarming mindset: that a "rapture-like" event could occur when AGI is unleashed, a scenario in which superintelligent machines could fundamentally alter or even threaten human existence. His actions, including plans to build a bunker, are presented not as paranoia but as precautionary measures rooted in genuine concern over potential runaway AI development.

The Societal and Economic Ramifications