You are viewing a single comment's thread from:

RE: LeoThread 2025-10-18 23-22

in LeoFinance14 hours ago

Part 1/10:

Groundbreaking Research Reveals Small Samples Can Poison Large Language Models

Recently, a significant paper from Anthropic has sent shockwaves through the artificial intelligence community. The study uncovers that a surprisingly small number of malicious samples—sometimes just hundreds of documents—can effectively poison large language models (LLMs) of any size. This finding challenges the long-held assumption that an attacker needs to control a significant portion of the training data to compromise an LLM’s integrity.

The Power of Minimal Poisoning