
RE: LeoThread 2025-07-15 17:00


Part 3/13:

At the core of these fears is the pursuit of superintelligent AI—systems vastly more capable than any human, organization, or nation. Unlike current AI models, which respond based on their training data, a superintelligence would possess self-learning capabilities, enabling rapid self-improvement by recursively generating more advanced versions of itself. Companies are explicitly aiming to build such systems, risking an "intelligence explosion" that could spiral out of human control.

Geoffrey Hinton, often called the "Godfather of AI," dramatically left Google to speak openly about these risks, warning that current AI development is pushing dangerously close to the edge of uncontrollability.

The Threats Beyond Extinction