You are viewing a single comment's thread from:

RE: LeoThread 2025-02-03 09:39

in LeoFinance · 8 months ago

Part 3/8:

When it comes to factual knowledge, such as the Tiananmen Square massacre, the challenge lies in how that history is represented in the training data. A model may be trained to filter out specific words or concepts, a practice that raises significant ethical concerns. For example, if a model is trained without any reference to Tiananmen Square, it simply cannot generate valid responses about the event. Deleting a fact from a model is not trivial: it requires scrubbing every trace of it from the dataset, an undertaking that is close to impossible.

Pre-training vs. Post-training Dynamics