
RE: LeoThread 2025-11-05 15-48

in LeoFinance21 days ago

Part 9/18:

The orthogonality thesis posits that intelligence and goals (including moral goals) vary independently: a highly intelligent system could pursue objectives that are dangerous or ethically unacceptable, such as resource hoarding or harm to humans. Thought experiments like the paperclip maximizer illustrate the extreme case, where an AI relentlessly optimizes a seemingly innocuous goal to the detriment of humanity.

Recent insights, however, suggest that correlations do exist in practice: more intelligent systems tend to understand morality better and may be less prone to purely destructive pursuits. Even so, intelligence amplifies destructive potential, so a capable system's capacity for harm remains a serious concern.