RE: LeoThread 2025-11-04 23-07

in LeoFinance · yesterday

Part 7/13:

OpenAI, recognized for pioneering safety initiatives, committed in July 2023 to allocating 20% of its compute resources toward solving the superalignment problem. However, key personnel—including Ilya Sutskever and Jan Leike—have since departed, raising questions about whether current efforts are sufficiently robust or whether internal disagreements threaten the pursuit.

Furthermore, the term Artificial General Intelligence (AGI) is being reevaluated. Several experts prefer the broader term "advanced AI," which reflects a spectrum of capabilities rather than a single threshold. This spectrum makes safety harder to define, since there is no clear-cut line where AI moves from "benign" to "dangerous."


Regulatory and Industry Gaps