RE: LeoThread 2025-11-04 23-07

in LeoFinance · 2 days ago

Part 7/16:

He argues that hallucinations, mistakes, and misbehavior in AI are not evidence of inherent misalignment but correctable behaviors. In his view, claiming that alignment is "inherently difficult" or "impossible" is an unfounded assertion, especially given the practical success in aligning current AI systems to a meaningful degree. The projection that future superintelligent AI will inevitably be malevolent is anthropomorphic, resting on flawed assumptions about AI agency and autonomy.

The Myth of the "Treacherous Turn"