RE: LeoThread 2025-11-04 16-50

Part 6/11:

OpenAI deliberately avoided applying heavy supervision to the models' reasoning process in order to preserve transparency, intentionally leaving behaviors visible so they can be scrutinized and monitored. Its safety measures have held up under testing, even against domain-specific fine-tuning for malicious purposes, though potential risks remain. OpenAI is also crowdsourcing safety research, offering $500,000 in prizes for those who identify vulnerabilities or safety issues, an innovative approach to distributed AI safety development.

Implications for Power and Control