Part 7/11:

OpenAI’s approach reflects this reality: current models pattern-match rather than genuinely know. They have no built-in theory of mind or self-awareness about their own knowledge gaps.


Is Fine-Tuning Worth It?

Despite its limitations, fine-tuning remains valuable for teaching patterns, formats, and specific tasks, for example training a model to produce long-form fiction in a house style or code snippets in a particular format (see the sketch below).
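As a concrete illustration (a minimal sketch, not from the original post), fine-tuning data for this kind of pattern-teaching task is typically just a file of example conversations. The snippet below builds a few in the JSONL chat format used by OpenAI-style fine-tuning endpoints; the task, prompts, and file name are hypothetical.

```python
import json

# Hypothetical pattern-teaching task: answer story prompts in a fixed
# three-act micro-fiction format. Fine-tuning teaches the *shape* of the
# output, not new facts.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You write three-act micro-fiction."},
            {"role": "user", "content": "A lighthouse keeper finds a letter."},
            {"role": "assistant", "content": "Act I: ... Act II: ... Act III: ..."},
        ]
    },
    # ... more examples; in practice dozens to hundreds are needed.
]

# Fine-tuning endpoints typically expect one JSON object per line (JSONL).
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Note what is absent: there is no place in this file to store a knowledge base, which is exactly why fine-tuning fits formats better than facts.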

But:

  • It is not suitable for creating an institutional knowledge base.

  • It requires significant effort and expertise.

  • It may still produce hallucinations and inaccuracies.

David emphasizes that prompt engineering generally outperforms fine-tuning for many user needs: it is cheaper, faster, and more scalable.
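To make the contrast concrete (a sketch assuming the OpenAI Python client's v1 chat API; the document text and model name are placeholders), prompt engineering puts the knowledge into the prompt at request time instead of baking it into the model's weights:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder institutional document; in practice this would be retrieved
# from a knowledge base per request rather than fine-tuned into weights.
policy_doc = "Refunds are processed within 14 days of receiving the returned item."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from the reference text below. "
                "If the answer is not there, say you don't know.\n\n"
                f"Reference:\n{policy_doc}"
            ),
        },
        {"role": "user", "content": "How long do refunds take?"},
    ],
)
print(response.choices[0].message.content)
```

Updating the answer here means editing one string, not re-running a training job, which is the cost and speed advantage the post is pointing at.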