RE: LeoThread 2025-10-18 18-49

in LeoFinance · 6 days ago

Part 10/15:

Currently, many AI safety efforts focus on alignment—fine-tuning models to prevent harmful outputs. Miikkulainen suggests an alternative: develop AI with capabilities and awareness that allow it to evaluate actions internally, understanding both their risks and benefits. A trustworthy AI would not just follow programmed rules but would possess an ethical compass, enabling it to navigate complex moral landscapes.

However, he acknowledges the difficulty of defining "good" and "bad" universally, especially given AI's siloed nature. Different systems are built for different purposes—some for automation, others for decision support—so responsibility for ethical behavior is decentralized and context-dependent.

Foundations, Knowledge, and the Future of AGI