Comment by meander_water
3 months ago
I don't think this is a bombshell finding. Check out this paper [0] from a year ago; Anthropic's research just gets a lot more views.
> Our experiments reveal that larger LLMs are significantly more susceptible to data poisoning, learning harmful behaviors from even minimal exposure to harmful data more quickly than smaller models.