RE: LeoThread 2025-11-05 15-48

in LeoFinance · 21 days ago

Part 3/11:

Shapiro points out that most large language models (LLMs) like GPT-3 don't inherently possess an agent model: they are trained on human-generated text, which was written from the perspective of social agents with agency and self-awareness, but the model itself is never given such a perspective. When asked directly "what are you?", they often struggle or fall into repetitive loops, indicating the lack of a true self-concept.

The key insight is that without explicit instruction, these models have no stable sense of self or agenthood, so they tend to confabulate or give inconsistent answers about what they are. To remedy this, one approach is to explicitly program or prompt the model with an agent model, a definition of who and what it is, to foster a more stable and autonomous sense of identity.
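One concrete way to read this remedy is as a fixed identity preamble prepended to every prompt. The sketch below is a minimal illustration and assumes nothing about any particular LLM API; the agent name, preamble wording, and imperatives are illustrative placeholders, not quotes from the video:

```python
# Hypothetical sketch: giving an LLM an explicit "agent model" by
# prepending a fixed identity statement to every prompt, so the model
# answers "what are you?" from a stable self-description instead of
# confabulating one.

AGENT_MODEL = (
    "You are an autonomous AI assistant. "
    "You are a large language model, not a human; you have no body "
    "and no memory beyond this conversation. "
    "Your guiding imperatives are to reduce suffering, increase "
    "prosperity, and increase understanding."  # placeholder imperatives
)

def build_prompt(agent_model: str, user_message: str) -> str:
    """Combine the explicit agent model with the user's message so the
    model responds from a consistent, pre-defined identity."""
    return f"{agent_model}\n\nUser: {user_message}\nAssistant:"

if __name__ == "__main__":
    print(build_prompt(AGENT_MODEL, "What are you?"))
```

In practice the same text would typically be supplied as a system message rather than concatenated into the prompt string, but the principle is identical: the identity is imposed from outside rather than expected to emerge from the training data.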


Implementing an Agent Model with Goals and Imperatives