
RE: LeoThread 2025-11-05 15-48

in LeoFinance · 21 days ago

Part 7/12:

The assumption that sufficiently advanced agents will generate human-level hypotheses about their environment is at odds with current models such as GPT-3, which encode knowledge implicitly rather than through explicit world models. The very idea of a separate world-model module may already be outdated, given how much understanding of the world large language models embed directly in their weights.
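To make the implicit-vs-explicit distinction concrete, here is a toy sketch (the names and setup are mine, not from the original post): the same environmental regularity can be stored as a hand-written world-model rule, or recovered purely from learned statistics with no rule ever written down, which is closer to how LLMs encode knowledge in their weights.

```python
from collections import defaultdict

# Explicit world model: a hand-written transition rule.
def explicit_next(state):
    rules = {"day": "night", "night": "day"}
    return rules[state]

# Implicit "model": bigram counts learned from raw experience.
# No rule is ever written down, yet the same prediction is
# recovered from statistics alone.
def train_implicit(stream):
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(stream, stream[1:]):
        counts[prev][nxt] += 1
    return lambda s: max(counts[s], key=counts[s].get)

stream = ["day", "night"] * 50
implicit_next = train_implicit(stream)
print(explicit_next("day"), implicit_next("day"))  # both predict "night"
```

Both functions give the same answer, but only one of them contains anything you could point to and call "the world model".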

2. Planning Under Uncertainty

The claim that an AI will rationally weigh the costs and benefits of learning assumes an idealized agent. In practice, learning is intrinsic and continuous: systems should learn automatically from every observation rather than selectively deciding when to learn. Human cognition illustrates this, with continuous learning driven by necessity rather than explicit choice.
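A minimal sketch of what "continuous, automatic learning" means here (this is my illustration, not the post's): the agent updates its model on every single observation, with no cost-benefit gate deciding whether a given experience is worth learning from.

```python
import random

def continuous_learner(stream, lr=0.05):
    """Fit y = w * x by updating on every observation, unconditionally."""
    w = 0.0
    losses = []
    for x, y in stream:
        err = w * x - y
        losses.append(err ** 2)
        w -= lr * err * x  # learn on each step; no "should I learn?" decision
    return w, losses

random.seed(0)
# Noise-free toy data with true slope 2.0 (hypothetical example data).
stream = [(x, 2.0 * x) for x in (random.uniform(-1, 1) for _ in range(200))]
w, losses = continuous_learner(stream)
print(w)  # drifts toward the true slope 2.0 as errors accumulate
```

The point of the sketch is the loop shape: every observation triggers an update, so improvement is a byproduct of acting in the environment rather than a separately planned activity.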

3. Goals and Biases