
RE: LeoThread 2025-11-05 15-48


Part 6/11:

On safety, OpenAI admits that "what seems right in theory often plays out more strangely in practice," signaling an awareness of the complexity and unpredictability of deployment. They propose a cautious approach of releasing less powerful versions first, to build understanding and reduce the risk of a "big bang" scenario in which deployment leads to uncontrollable outcomes.

Shapiro perceives this stance as somewhat self-assured, reflecting the mentality that "it's up to us" to figure out safety and effective alignment. But he points to a concerning inference: organizations like OpenAI may believe they are uniquely qualified to steward AGI's development, which could lead to overconfidence and to neglecting broader collaborative efforts.