RE: LeoThread 2025-11-05 15-48

in LeoFinance · 21 days ago

Part 7/13:

OpenAI champions the idea that bigger models naturally lead to smarter AI. The company relies heavily on reinforcement learning from human feedback (RLHF), iteratively improving its models based on human judgments of their outputs. This approach, however, is susceptible to bias: because the models are tuned toward labelers' preferences, they can absorb the moral leanings of their creators, producing a contentious and, at times, inconsistent moral stance.
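
As a rough illustration of where human feedback (and its biases) enters the pipeline, here is a minimal sketch of the Bradley-Terry preference loss commonly used to train RLHF reward models. This is a generic textbook formulation, not OpenAI's actual code; the function names and numbers are illustrative.

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Negative log-likelihood that the human-preferred response wins
    under a Bradley-Terry model: P(chosen) = sigmoid(r_chosen - r_rejected).
    A reward model trained on this loss learns to score responses the
    way its human labelers would rank them."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# Low loss when the reward model already agrees with the labeler;
# high loss (and a large gradient) when it disagrees.
print(preference_loss(2.0, 0.5))  # ~0.20: correct ranking
print(preference_loss(0.5, 2.0))  # ~1.70: wrong ranking
```

Because the reward model is optimized purely against labelers' rankings, any systematic preference or moral leaning among those labelers flows directly into the final model, which is exactly the bias described above.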

Anthropic: Ethics rooted in harm reduction