RE: LeoThread 2025-01-28 12:24

in LeoFinance · 11 months ago

If Even 0.001 Percent of an AI's Training Data Is Misinformation, the Whole Thing Becomes Compromised, Scientists Find

NYU researchers found that poisoning just 0.001% of an LLM's training data with misinformation can produce widespread errors in its output, posing serious risks in medical applications. The study, published in Nature Medicine, also found that corrupted LLMs perform comparably to uncorrupted ones on standard benchmarks, making the vulnerability hard to detect.

#technology #ai #data